The Collaborative Software Lab (home of the Georgia Tech Squeakers, http://coweb.cc.gatech.edu/csl) has as its goal the creation of Collaborative Dynabooks. The Dynabook is what personal computers were designed for: the creation, storage, and playback of personal dynamic media to support learning and thought by anyone, from children to expert programmers. The Dynabook was the vision of Alan Kay that drove the creation of Smalltalk-80 and the desktop user interface by Alan, Dan Ingalls, Adele Goldberg, Ted Kaehler, and others at Xerox PARC in the 1970s. We often use the programming language Squeak (http://www.squeak.org), which is the focus of our current research toward creating the Dynabook. Squeak is an open source effort based at Viewpoints Research, which Alan Kay directs; Kim Rose, Dan Ingalls, Ted Kaehler, John Maloney, Scott Wallace, and Andreas Raab are still involved in developing Squeak toward the Dynabook vision.
The goal of the Georgia Tech Squeakers is to realize Dynabooks as a collaborative medium, where groups of people can compose together, easily share one another's media, and can critique and review media.
If you're interested in any of these, contact Mark Guzdial at email@example.com or 404-894-5618.
The projects on this page are things that we're interested in seeing done that further this goal, in one of the areas below:
CS1315 Introduction to Media Computation is now a reality. The course website and planning site have lots of information in them about how the course was run, the learning objectives, papers written on the course, proposals we've written so far, the technology we developed (including JES, the Jython Environment for Students), etc.
Now that we've run the class once, our list of things to fix, change, and enhance is long. Here are some of the projects I see here.
I've had various requests for bug fixes or enhancements to the help functions.
Spencer Rugaber and I have a new NSF-sponsored project to develop tools to support Open Source development. Our project, Ectropic Design (from the word "ectropy," meaning the opposite of "entropy" -- creating order out of chaos), is interested in trying to help open source software efforts succeed even if there is no central coordinator/synthesizer/integrator of the kind that exists on Open Source projects like Linux, Apache, and Squeak.
Our idea is to combine the collaboration tools of our group with Spencer's tools for software engineering. The challenge is that software engineering tools essentially call for people to make explicit things not in the code, e.g., goals, objectives, requirements, etc. How do we make the value of these things obvious enough that Open Source developers will actually use the tools?
We have a version of this now, called ECode. We have data on its use. If you're interested in helping us analyze the data and/or design and build the next generation ECode, there are projects available!
Creating Media, not just Using It: Squeak lets you use lots of kinds of media: MPEG, MIDI, VRML/3DS, QuickTime, Flash, AIFF, WAV, JPEG, GIF, etc. However, we can only CREATE (of those) sound (AIFF and WAV) and graphics (JPEG and GIF). We can't currently create ANY of these other formats.
There are two separate projects here:
Augmented Reality in Squeak: Blair MacIntyre really likes Squeak, but he doesn't have the time to invest in exploring Squeak since it doesn't really directly help his work in Augmented Reality (brief example: You look through goggles. You see the world around you, but overlaid on top of that is some kind of information, computer-generated, that is directly based on what you're looking at). The work he's doing in Augmented Reality is SOOOO up Squeak's alley! He's working with people in LCC (specifically, Jay Bolter) to build tools so that non-technical writers can build stories in Augmented Reality. For example, they did a great one where you look at historical buildings in the Sweet Auburn district and "ghosts" appear to tell you stories about the buildings! Wouldn't it be cool to do this in Squeak? Let me and Blair know if you'd like to work on this.
Supporting Gesture-Based Interfaces:
BACKGROUND MOTIVATION: I've been reading Janet Murray's "Hamlet on the Holodeck" where she describes how technology will impact the future of fiction. One of the features she talks about as improving interactive fiction is using gestures for actions rather than just mouse clicks or text commands. She talks about "pushing" something away with the mouse, or "pulling" something in to you.
Her comments got me thinking about why we DON'T have more gesture-oriented interactions in interfaces. One reason is that it's not part of most standard UI toolkits. Look at Morphic, as an example. We can catch mouseUp, mouseDown, mouseEnter, mouseDrag, but not a gesture like clicking on something and pushing it toward a target.
But we can only partially blame it on the UI builders -- we do know HOW to build gesture-recognition. Graffiti and similar recognizers are based on hidden Markov models (HMM's) that get trained up to recognize certain patterns. So, the second reason is that programmers (even UI builders) aren't using the existing technology. Why not?
One reason was explained by Chris Hoadley (for those who know Chris for his excellent work in CSCL, this work was done for his MS thesis as a CS student) of Berkeley at the last Empirical Studies of Programmers (ESP) Workshop. Chris noted that students rarely used library routines, preferring to build things from scratch. Just knowing names and arguments didn't help. He found that he had to give students a couple lines of description of what the routine was and how it worked. That leads to an insight as to why people don't use HMM's -- they're hard to understand, so programmers avoid them.
A POTENTIAL SOLUTION: How could we describe gestures in a way that even unsophisticated programmers could grok them? The simplest mathematical structure that I can think of for describing gestures is a collection of vectors. If the action of clicking down and pushing could be described as a vector of such-and-such a length in a northeastern direction, the programmer could pretty quickly figure out what's in a northeastern direction and interpret that correctly.
Vectors alone only buy us so much. Generic vectors can still result in some pretty complicated collections for a gesture. We also want the "same" gesture to return the same/similar collection of vectors. (That's what those HMM's are for, after all.)
So, what if we limit the vectors to only coming in eight directions: N, NE, E, SE, S, SW, W, and NW? The UI system can map the gestures that come in into these directions, which will compress a whole bunch of similar gestures into a handful of vectors.
One could even imagine implementing something like Graffiti using a system like this. For example, the A symbol in Graffiti would map to two vectors: One going NE, one going SE. Size wouldn't matter here.
There are some algorithm complexities. Let's say that the user traces a circle. What you'd want to return are eight vectors, but it may be fairly complicated to decide that the user has "changed direction" when it's a continuous motion. By tracking "error" (at each iteration of polling the Sensor, how far off the user's direction was from the currently guessed "compass" direction), it should be possible to figure out when error has grown to the point that we should assume a second vector and not a continuation of the previous.
There are some interesting issues to work out with respect to events. What events should be sent to the object that receives the initial mouseDown? vectorDown with a collection of vector objects? (Isn't this similar to how the CharRecognizer works now? Perhaps we can just extend that into Morphic?)
There are some neat additional issues to explore. What if you captured (and returned) the velocity of the vector ((x1 - x0)/(t1 - t0))? What gestures can people do with this? Or with a pen, the velocity of the pressure?
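A minimal sketch of the whole idea, in Python for illustration (a Squeak version would live in Morphic's event handling; all function names here are hypothetical): quantize each polled mouse movement to the nearest of the eight compass directions, start a new vector whenever the quantized direction changes, and keep length and duration so velocity falls out for free.

```python
import math

DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def compass(dx, dy):
    """Quantize a movement (dx, dy) to the nearest of eight compass
    directions. Screen y grows downward, so negate dy for conventional angles."""
    angle = math.atan2(-dy, dx)               # radians, -pi .. pi
    octant = round(angle / (math.pi / 4)) % 8
    return DIRECTIONS[octant]

def gesture_vectors(points):
    """Collapse a sampled mouse path [(x, y, t), ...] into a list of
    (direction, length, duration) vectors, starting a new vector
    whenever the quantized direction changes."""
    vectors = []
    for (x0, y0, t0), (x1, y1, t1) in zip(points, points[1:]):
        d = compass(x1 - x0, y1 - y0)
        length = math.hypot(x1 - x0, y1 - y0)
        if vectors and vectors[-1][0] == d:   # same direction: extend vector
            _, prev_len, prev_dt = vectors[-1]
            vectors[-1] = (d, prev_len + length, prev_dt + (t1 - t0))
        else:                                 # direction changed: new vector
            vectors.append((d, length, t1 - t0))
    return vectors

# A Graffiti-style "A" stroke: up-and-right, then down-and-right.
stroke = [(0, 100, 0.0), (25, 75, 0.1), (50, 50, 0.2),
          (75, 75, 0.3), (100, 100, 0.4)]
print([d for d, _, _ in gesture_vectors(stroke)])   # ['NE', 'SE']
```

Note that size doesn't matter here, just as suggested above, and length/duration for each vector gives its average velocity. The error-tracking refinement for continuous motions like circles would go inside the "same direction" branch.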
To make a paper out of this we'd have to:
NOTE: There is now a handwriting recognizer built into Squeak called Genie. Is it possible to build on top of Genie to make it easier for programmers?
Better Fonts: Squeak has terrific text support, for text that can flow through pipes and around irregular shapes, in real-time. Squeak's current text support uses "strikefonts" (bitmaps). There is developing support for TrueType fonts, but not throughout the system, and free and sharable TrueType fonts are hard to come by.
Comprehensive Retrieval Language: Squeak contains a version of a 3-D novice programming environment, called Wonderland by Jeff Pierce of CMU, based on Alice by Randy Pausch, Jeff, and others at CMU. Wonderland has a terrific model for end-user programming, where any action can be encapsulated in a script and easily combined to create larger elements. We'd like to try to use that kind of scripting model more broadly, with a retrieval language as a good example of this. Building a retrieval, you often combine this query with this filter and that filter. A Wonderland model fits well.
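A sketch of what that composition might feel like, in Python for brevity (a Squeak version would use Wonderland-style script tiles; all names below are made up): queries and filters are just predicates, and combining them yields another script-like object.

```python
# Queries and filters are just predicates; combining them yields a new
# script-like object, much as Wonderland composes scripts into larger ones.
def both(f, g):
    """Combine two filters into one."""
    return lambda item: f(item) and g(item)

def retrieve(items, *filters):
    """Apply each filter in turn -- a retrieval built from small scripts."""
    result = list(items)
    for f in filters:
        result = [item for item in result if f(item)]
    return result

songs = [{"title": "Air", "year": 1723}, {"title": "Bolero", "year": 1928}]
is_old = lambda s: s["year"] < 1800
has_short_title = lambda s: len(s["title"]) <= 3
print(retrieve(songs, both(is_old, has_short_title)))
# [{'title': 'Air', 'year': 1723}]
```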
New musical timbres: Squeak has a powerful set of audio and music capabilities. Alan Kay and John Maloney would like to make "KORK-like timbres" that are compact sampling timbres with short wave buffers that use low-pass and other filtering to change the timbres. There are some great starts at http://swiki.cc.gatech.edu:8080/compMusic
A Movie/Play Authoring Tool: Imagine a stack of frames, where one frame is visible at a time (à la HyperCard). In the top of the frame, an image can be drawn or loaded from disk. In the bottom of the frame, sound tiles can be recorded and ordered in a space. When the movie or play is presented, each image frame is shown, and the sounds are played in order. What you get is a still-frame movie with recorded sound dialog. Could be done in Squeak cross-platform pretty easily. I HAVE ONE VERSION OF THIS done by a Senior Design team, but it's not robust and it's only been tested a little with real users so far.
Build a connectivity viewer This is a tool for programming Squeak -- the kind of thing that would tell you how several classes might be interrelated. Basically, tell me what other classes are referenced from a given class. Unleashed on the whole system, it might suggest a better way to categorize classes, and which methods could reasonably be considered to be private.
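A crude version of the core analysis is easy to sketch. In Squeak proper you would ask each class's compiled methods for the classes among their literals; the Python illustration below (all names hypothetical) approximates that by scanning source text for known class names:

```python
import re
from collections import defaultdict

def class_references(sources):
    """Given {class_name: source_code}, return {class_name: set of other
    known classes referenced in its source}. A real Squeak version would
    inspect compiled-method literals rather than scanning text."""
    known = set(sources)
    refs = defaultdict(set)
    for name, code in sources.items():
        for ident in re.findall(r"\b[A-Z][A-Za-z0-9]*\b", code):
            if ident in known and ident != name:
                refs[name].add(ident)
    return dict(refs)

# Toy "class sources" in Smalltalk-ish pseudocode:
sources = {
    "Account": "balance := Money zero. history := OrderedCollection new.",
    "Money": "amount := 0.",
    "OrderedCollection": "...",
}
print(sorted(class_references(sources)["Account"]))
# ['Money', 'OrderedCollection']
```

Run over the whole image, the resulting graph is what would suggest better class categories, and any class referenced by nothing outside its category is a candidate for being private.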
Speed-up Media Output: Squeak can generate AIFF and WAV audio files, but it's slow. Squeak has a nice facility for generating C from Smalltalk, so that cross-platform low-level speedups can be done without leaving Squeak. Use these to improve the speed of these capabilities. (See http://guzdial.cc.gatech.edu:8080/personal.134 for more info.) Also, see Andy Greenberg's chapter on Slang at http://coweb.cc.gatech.edu/squeakbook/
I recently attended a project meeting at the Mellon Foundation, and I realized that distance education is COMING, and that EVERY higher ed institution is going to be trying to do it. The interesting question is how to do it right.
I started to think about how to do it in Squeak to teach programming, and I started to see how the pieces could work together:
How to do this? Well, it actually requires lots of smaller pieces of code. EACH OF THESE COULD BE A PROJECT FOR SOMEONE.
The tricky part is animations (e.g., Flash, etc.). Let's think about it this way: How would you represent an animation in a book naturally? Typically, you'd create some key snapshots of the animation in process, then point out what's going on in each of those frames and snapshots. You'd say "Here's how it looks at the start" and "Then this happens" and "Then we end up like this." Here's my idea for how to work it all: Wrap an animation "frame" around an animation/frame/demo. Use the frame to take snapshots of the animation at various points and associate a caption with those snapshots.
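The wrapping idea can be sketched quickly. This Python illustration (all names hypothetical, with a stub standing in for real rendering) shows an animation wrapper that records captioned snapshots, book-style:

```python
class AnnotatedAnimation:
    """Wraps an animation (here just a function frame_number -> frame)
    and records captioned snapshots at chosen points, book-style."""

    def __init__(self, render):
        self.render = render      # hypothetical frame renderer
        self.snapshots = []       # list of (frame_number, frame, caption)

    def snapshot(self, n, caption):
        """Capture frame n with an explanatory caption."""
        self.snapshots.append((n, self.render(n), caption))

    def as_book_pages(self):
        """Lay the captioned snapshots out as static 'book' pages."""
        return [f"Frame {n}: {caption}" for n, _, caption in self.snapshots]

anim = AnnotatedAnimation(render=lambda n: f"<frame {n}>")
anim.snapshot(0, "Here's how it looks at the start")
anim.snapshot(12, "Then this happens")
anim.snapshot(30, "Then we end up like this")
print(anim.as_book_pages()[0])   # Frame 0: Here's how it looks at the start
```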
The general ideas of Dynabooks certainly extend beyond Squeak, but it's not nearly as easy to do multimedia on other platforms.
But I think it's well worth doing! There is ample evidence that Computer Science classes tend to filter out women and other minorities, rather than include them. Part of the issue is that early CS classes tend to be rather boring. Yet programs that make music or do graphical transitions or combine media are no more complex than the searches, sorts, and data structure assignments that typically fill CS1 and CS2, and they work just as well for examples and homework.
Programming Media in Squeak: We have an agenda these days in computer science education. We'd like to see "Hello, World!" go away as a first programming activity, in favor of more creative multimedia activities. (See editorial.) We've been exploring music, but we'd like to build some Squeak-based simple activities in:
Extending to our CS1/CS2 classes. I would like to see programs, for both DrScheme and Java, to:
Squeak has some new media formats that allow users to easily create dynamic media, e.g., drag-and-drop of animated, multimedia elements (called Morphs). Some of these are in book form (bookmorphs), while others are like a saved window of lots of interesting things that can be easily shared (shared project segments). These kinds of new media can be used for new genres of media, like Active Essays (see examples at http://swiki.cc.gatech.edu:8080/compMusic/ActiveEssays), where text, graphics, and equations are interspersed with dynamic displays allowing exploration, simulation, and testing of ideas. Making these media formats robust, making examples of these media, and distributing them are goals for the Squeakers.
Text-based Information Visualization. Information visualization is a hot field these days. It's about showing complex information (like the genes in DNA, or how a network is set up). It's an interesting challenge in use of screen space and providing navigation.
And I'm wondering how much of it is better done on Paper. Seriously! We know how to use Paper! Paper has AMAZING resolution!
Here's the challenge: Build a PAPER based information visualization tool (there are lots of tools to compare to) and use indices and table-of-contents to provide access without a navigation tool. Now evaluate it. It would make a wonderful research project!
Create CS Bookmorphs: Take your favorite CS idea (a particular sort, the way that memory paging works, a curve fitting algorithm) and make up an Active Essay to explain it. Use bookmorphs or project segments.
Build the Next Generation Emile: Emile (http://www.cc.gatech.edu/gvu/edtech/Emile.html) was my dissertation project: A simple environment for students to build physics simulations as a way of learning about programming and simulations. I've had various queries about where "Emile" is today. The answer: A bad engineering effort dies an ignoble death due to bugs and incompatibilities. There are better tools today! A new Emile could be made simply and cross-platform. Doing it in Morphic in Squeak would be great and not too hard. My new book has a chapter where I use the Morphic end-user scripting tools to build one physics simulation. It would be nice to build in the same kinds of notebooks that my Emile students had with bookmorphs so that each simulation was actually an Active Essay.
Spreadsheet Cells: Currently, we're lacking a TableMorph in Squeak, with real spreadsheet cells that can reference one another, maintain constraints, etc. Further, a SpreadsheetMorph that could support generalized retrieval and linking would be terrific. A good CellMorph that could contain other morphs would make for powerful, multimedia spreadsheets. A good SpreadsheetMorph could easily be shared in Squeak New Media (see below).
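To make the cell-reference idea concrete, here's a minimal sketch in Python (a real CellMorph would be a Morph with eager change propagation and real constraint maintenance; all names here are hypothetical):

```python
class Cell:
    """A spreadsheet-style cell holding either a value or a formula over
    other cells. Recomputes lazily on demand; a real TableMorph would
    cache results and propagate changes eagerly."""

    def __init__(self, value=None, formula=None):
        self.value = value
        self.formula = formula   # zero-argument callable, or None

    def get(self):
        return self.formula() if self.formula else self.value

a = Cell(value=3)
b = Cell(value=4)
total = Cell(formula=lambda: a.get() + b.get())
print(total.get())   # 7
a.value = 10         # the reference is live: total follows
print(total.get())   # 14
```

The same `get` protocol could return a morph instead of a number, which is exactly what would make a CellMorph containing other morphs into a multimedia spreadsheet.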
Outlining Everywhere: Michael Starke is a Squeak developer who has created a new outlining tool, Golgi, which he's already using for note-taking. He's also explored the use of Golgi for creating code, which is naturally hierarchical. We would like to finish this work: Make Golgi work for code, and for editing/exploring any hierarchical data structure. Allow us to use outlining for a wide variety of activities.
Faces and Speech in Squeak: Luciano Notarfrancesco is a Squeak developer who has developed face animation and speech synthesis capabilities in Squeak on Linux. We'd like to move these into cross-platform Squeak and to use these in creating end-user modifiable new media tools.
People in Squeak: We'd like to be able to define a simple skeleton in 2D or 3D (using ThingLab, perhaps?). Give it rules for sitting, standing, walking, etc. Now, extend the skeleton with skin supports, and wrap an outline on the supports. Add texture mapping for clothing. Combine it with faces and speech from above...
A Squeaky MOO: I'd like to have a MOO (a programmable text-based virtual reality) in Squeak. There are a couple of different things this might mean:
We have some of these from the Fall 2001 offering of CS2340. Anyone want to document one of the better ones and make it available for others to develop on? We've got servers to host the MOO from.
Combining Servers and Squeak: We have built a Telnet-based tool for interacting with external servers running high-end simulations or gathering process control data. We'd like the ability to (a) control these with Squeak's end-user scripting system and (b) present these data with Morphic-based visualization tools. For example, the Chemical Engineers have laboratory equipment that generate data. They'd love to easily be able to create tools to let whole classes of students grab and use this data. Similarly, they'd like to be able to share simulations running on high-end boxes across a whole class (e.g., pick up the data via Telnet, work with it inside a Squeak-based simulation/analysis tool.)
Multimedia authoring in Squeak: Squeak has some great multimedia capabilities now: Sampled sounds (recording and playback), MIDI, all kinds of graphical transformations, 3-D graphics. MPEG AND QUICKTIME VIDEO AND AUDIO NOW WORK! Best yet, it works cross-platform robustly. (And is lots smaller than Java.) I'd like to have tools to make it easier for students to do authoring with this kind of power. For example, several years ago, I built a multimedia composition tool for Grade 6-12 students called MediaText (http://www.cc.gatech.edu/gvu/edtech/MediaText.html). It worked -- students and teachers could build multimedia documents in about 15 minutes with it. It won awards -- Parents magazine gave it a Gold Award, and Technology and Learning called it one of the top programs of 1992. But it died off commercially, and today, it would be obsolete because it doesn't do the Web. I think that a free version of MediaText that generated Web documents would be a really useful tool.
Here's an idea I'd love to explore with this: MediaText made text primary, with media hanging off of the text --- what if you let ANY medium be primary? Hang text and pictures to appear at intervals off an audio clip? Or off a video clip?
3-D Everything: Squeak has great 3-D support. Recently, there have been interesting experiments where people have mapped other objects to a 3-D space, like mapping UNIX processes to soldiers in Doom, then using shots on the soldier to re-nice and eventually delete a process. We'd like a framework to do this in Squeak, so that objects in Squeak could map to 3-D objects, and actions in the 3-D space could map to actions on the objects. (See http://guzdial.cc.gatech.edu:8080/personal.134 for more details.)
Build a prototype-oriented Squeak. Currently, Squeak uses a class-based object system, but a prototype-based system has advantages for ease of development.
Actually, Hans-Martin Mosner did a cool experiment that does most of the hard work. Check out... http://www.heeg.de/~hmm/squeak/System-Prototypes.st
Port the ThingLab constraint system to Morphic. ThingLab was the first constraint-based programming system. It's been ported to Squeak, but runs in its own space. Making it run with all the multimedia Morphic stuff could be very clean and very powerful. Morphic would handle all the graphics, giving double-buffered color display for free, and allowing a lot of code to be discarded. Plus the result would have broken out of browser and window boundaries in the process. Then, extend ThingLab to 3-D. (see http://guzdial.cc.gatech.edu:8080/personal.134 for more info.)
Physical modelling sound synthesis. Rather than model the sounds, model the actual instruments that make the sounds: Very compact to store parameters, very good sounds, but maybe compute-intensive for real time synthesis in Squeak. Goal: Build a program that reads some standard PM file format and generates a waveform (non real-time synth). Then use that waveform as a LoopedSampledSound. Bonus: Write a Squeak synthesis algorithm that is real-time. A first version of this has been created by Luciano Notarfrancesco, but it doesn't read any standard PM file format.
Speech compression Write a speech codec that is at least as good as GSM (10:1 compression ratio) but is small and written in Squeak (open, cross-platform).
FABRIK FABRIK is a drag-and-drop programming space where users connect up sources and sinks to program. A start on making this work in Squeak is already in the base release. Start with Play-With-Me-6 and the Fabrik paper from OOPSLA. Follow-on project: Write a Java version that can run models built in Squeak-Fabrik. A further follow-on project: Build a kernel set of widgets for Fabrik and then rebuild scrollbars, panes, menus, all as concretely assembled objects. Would be a dynamite user interface creation kit.
Morphic Model Clean up and enhance the Morphic Model of how to do UI. Use it to build scrollbars, numeric and text entry fields. Merge this with FABRIK work to create a general dataflow programming tool.
Natural Language Parser A lot of fun experiments would be possible if Squeak had a parser for natural language (English, say). Write one. Spencer Rugaber has a very nice one in C that would work here, and Squeak can now talk to the Princeton Wordnet database to get definitions and parts-of-speech.
For Collaborative Dynabooks, we need to be able to share all of these new media easily.
A Galleries Squeaklet: I think everyone has seen the Student Curated Galleries at http://herring.cc.gatech.edu:8080/2cool/3186
Anybody game to tackle it?
Making Badges More Useful: I'd like to conduct office hours from home with Audio Chat in Squeak. We have audio recording, compression, and Sockets -- and now, with the new Badges, we have audio and text chat BUILT IN! But it's hard to use -- you need to know peoples' IP addresses. Could we have some way of looking people up, à la Buddy Lists?
Cross language sharing: I have a new NSF ITR grant with Michigan, Northwestern, and U. Illinois-Chicago, and it's going to require us to support some kind of remote-procedure call from their code (mostly in Java) to our collaboration tools (of course, all in Squeak). What are our options?
The project is to be able to pass objects between Java and Squeak. Create a demo and documentation.
New Media Email: We need the ability to easily share externalized project segments via email -- Squeak-to-Squeak, with no intervening external email program. Let people share interactive, multimedia documents instead of text email.
New Media WWW: Instead of HTML pages, we need to be able to share new media project segments directly. They're easier to build, more dynamic, more powerful, and are already cross-platform. We've made a first pass at this (MuSwiki), and it's time for a second. It's getting real close -- Disney is actively moving toward a "Super Swiki," but they're planning on a dumb (FTP) server. How much better could it be if we built our own server?
Spread Squeak Media through WWW: A Squeak plugin for Windows, Linux, and Macs has come out which allows Squeak to work inside a browser window. We'd like to use that to provide a stepping stone to Squeak-based media from WWW-based media. Design a WWW set of pages, or CoWeb/Swiki (http://pbl.cc.gatech.edu:8080/myswiki.1) that provides a path into Squeak media for Squeak novices.
Good applications help to draw users in. We're interested in making Dynabook creation be part of daily use of the computer. Towards that end, it's useful to have a set of applications that people find useful and can use for their daily life.
Demos of Pocket Smalltalk: Pocket Smalltalk has already been ported to Squeak! You can develop your application in Squeak, then download it into PalmOS, GEOS, WinCE, or PocketPC. So we could do the Embedded Systems class with Squeak for Palm Pilots already!
Pocket Smalltalk doesn't download the Squeak VM into these devices. Rather, they have their own, even tinier VM, and Squeak here just serves as the IDE. But you use all of Squeak's normal development tools, browsers, etc. (See http://www.pocketsmalltalk.com/new/2_0_alpha_-_squeak.htm)
For more information: http://www.pocketsmalltalk.com/new/
The project is to build some demos and tutorials so that we can use Pocket Smalltalk in Squeak classes and projects.
Web browsing: Squeak contains a good web browser, Scamper, but it's incomplete. We'd like to add table support, and links so that all of Squeak's existing media support can be accessed through Scamper. For example, Squeak can handle Flash media, but Scamper doesn't currently interpret .swf and embed tags properly to utilize Flash.
Email: Squeak's email reader, Celeste, could use an overhaul. It works, but it doesn't know about attachments, mailing lists, digests, etc.
Calendar: There are lots of calendar programs and handheld PDA calendar systems now. Squeak has no calendar system. One that used the API of an existing system, to allow sharing and uploading/downloading of calendars, would be terrific.
Mail List Archive: I'd love a PWS/Comanche-based tool (http://minnow.cc.gatech.edu/swiki) for creating mail list archives. I host several mailing lists, and a Web-accessible archive is my most frequent request. Anthony Gelsomini built a great backend for this, but we still need a Web-based front-end for users.
File Reconciling: There are many times that I need to keep two folders on two different systems in sync (both Macs, or one FTP and one Mac, etc.), where I want to copy back-and-forth only newly updated files, not the whole directory structure every time. I used to use an Apple tool for this, but it recently broke. I'd like to do it from Squeak.
What are students doing in CoWebs?: I've done some research on how people use collaborative learning environments (see http://guzdial.cc.gatech.edu/papers/infoecol/ and also http://coweb.cc.gatech.edu/csl/Papers). I'd like to know some similar things about CoWeb use.
When does text win out over WYSIWYG?: The CoWeb ( http://minnow.cc.gatech.edu/swiki) is being used for a wide variety of uses, yet it's critiqued for being non-WYSIWYG. We use good ole text for creating links and editing. But I suspect that this is its strength, because of its flexibility. I want someone to take some tasks that I see students doing in CoWebs (e.g., critiquing projects, working together on shared documents, responding to sample exam problems, creating sign-up sheets, carrying on semi-threaded discussions) and write down exactly how you'd do these in things like Netscape Communicator and Microsoft Front Page (step-by-step, what menu items, what buttons), and then how people do these in CoWeb. My bet: Fewer steps, less cognitive load in CoWeb.
How are students using the CoWeb Cases? The CS2390 CoWeb (http://pbl.cc.gatech.edu/cs2390/1.html) and CS2340 CoWebs ( http://coweb.cc.gatech.edu/cs2340) have over 100 cases in them, posted by students in previous quarters. Students do use these cases -- they talk about them, they post notes of thanks on them, they ask students about them. But how do they use them? Specifically:
Similarly, we don't know how students write these cases. Do they think about how the cases will be used, or just get them out there for the grade? What are the authors' models for how the cases will be used? There's an interesting ethnographic study to do here.
What are the educational applications for which a Pentium is no better than a 486?: The rapid pace of hardware development has far outpaced software development, though it's tried hard. Perhaps it has tried too hard. Educational technology researchers have kept up with software researchers, always trying the latest and greatest tools (e.g., everyone uses "Java" today, even though most computers in schools today don't have enough memory to run Java). But maybe there are some great applications that we missed, things that you could do with a good ole 486 that are just as great when using a Pentium -- and no greater!
Think about the exponential hardware zorch curve. We've gone from 1 MIPS to 100+ MIPS in no time at all! The software industry and research have closely matched that curve: When 486's came out, people built software that only ran on 486's. When Pentiums came out, people built software that only ran on Pentiums. Now think about that curve again. What's happened is that the software industry has ridden that line of the curve -- but ignored the space under the curve! What can be done with that space? What are the cool ideas that were developed on the edge of the curve that can still be useful inside the curve?
In other words, I conjecture that there are educationally beneficial applications that do not require fast processors with lots of memory, and in fact, are not improved by fast processors with lots of memory. Identifying such applications is important because schools (at the very least, poorer schools) can't afford the latest and greatest. When creating new curricula and projects, it would be wonderful to be able to suggest uses of computers and know that even low-SES schools can take advantage of these uses. How low can we go? I have a TI-92 and an HP-48GX that cost about 10% of a good computer, and they're amazingly powerful.
The limits of this project are complicated. Do we define the baseline system in terms of cost? (That limit will inch upward over time.) Do we start at a 486 or something that students have a lot of -- Apple IIs? Do we define the baseline system in terms of theoretical capability? In terms of MIPS and RAM? And how do we describe the class of "educationally beneficial applications"?