
INTERACTIVE TOY

FIELD OF THE INVENTION

The present invention relates to computer systems and methods generally, and more particularly to development of interactive constructs, to techniques for teaching such development, and to verbally interactive toys.

BACKGROUND OF THE INVENTION

Various types of verbally interactive toys are known in the art. Generally speaking, these toys may be divided into two categories, computer games and stand-alone toys. The stand-alone toys, which typically have electronic circuitry embedded therein, normally provide a relatively low level of speech recognition and a very limited vocabulary, which often lead to child boredom and frustration during play.

Computer games enjoy the benefit of substantial computing power and thus can provide a high level of speech recognition and user satisfaction. They are characterized by being virtual in their non-verbal dimensions and thus lack the capacity of bonding with children.

The following patents are believed to represent the state of the art in verbally interactive toys:

US Patent 4,712,184 to Haugerud describes a computer controlled educational toy, the construction of which teaches the user computer terminology, programming, and robotic technology. Haugerud describes computer control of a toy via a wired connection, wherein the user of the computer typically writes a simple program to control movement of a robot.

US Patent 4,840,602 to Rose describes a talking doll responsive to an external signal, in which the doll has a vocabulary stored in digital data in a memory which may be accessed to cause a speech synthesizer in the doll to simulate speech.

US Patent 5,021,878 to Lang describes an animated character system with real-time control.

US Patent 5,142,803 to Lang describes an animated character system with real-time control.

US Patent 5,191,615 to Aldava et al. describes an interrelational audio kinetic entertainment system in which movable and audible toys and other animated devices spaced apart from a television screen are provided with program synchronized audio and control data to interact with the program viewer in relationship to the television program.

US Patent 5,195,920 to Collier describes a radio controlled toy vehicle which generates realistic sound effects on board the vehicle. Communications with a remote computer allows an operator to modify and add new sound effects.

US Patent 5,270,480 to Hikawa describes a toy acting in response to a MIDI signal, wherein an instrument-playing toy performs simulated instrument playing movements.

US Patent 5,289,273 to Lang describes a system for remotely controlling an animated character. The system uses radio signals to transfer audio, video and other control signals to the animated character to provide speech, hearing, vision and movement in real-time.

US Patent 5,388,493 describes a system for a housing for a vertical dual keyboard MIDI wireless controller for accordionists. The system may be used with either a conventional MIDI cable connection or by a wireless MIDI transmission system.

German Patent DE 3009-040 to Neuhierl describes a device for adding the capability to transmit sound from a remote control to a controlled model vehicle. The sound is generated by means of a microphone or a tape recorder and transmitted to the controlled model vehicle by means of radio communications. The model vehicle is equipped with a speaker that emits the received sounds.

The disclosures of all publications mentioned in the specification and of the publications cited therein are hereby incorporated by reference.

SUMMARY OF THE INVENTION

The present invention seeks to provide verbally interactive toys and methods therefor which overcome disadvantages of the prior art as described hereinabove.

There is thus provided in accordance with a preferred embodiment of the present invention interactive toy apparatus including a toy having a fanciful physical appearance, a speaker mounted on the toy, a user input receiver, a user information storage unit storing information relating to at least one user, and a content controller operative in response to current user inputs received via the user input receiver and to information stored in the storage unit for providing audio content to the user via the speaker.
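The role of the content controller can be illustrated with a minimal sketch that combines a current user input with stored user information to select audio content. The class, method, and field names below are illustrative assumptions, not part of the disclosed apparatus:

```python
class ContentController:
    """Illustrative sketch: selects audio content from the current
    user input and from information stored about the user."""

    def __init__(self, user_info):
        # user_info plays the role of the user information storage unit
        self.user_info = user_info

    def respond(self, user_id, current_input):
        # Personalize the content using stored information (the name).
        name = self.user_info.get(user_id, {}).get("name", "friend")
        if "story" in current_input.lower():
            return f"Okay {name}, here is a story for you."
        return f"Hello {name}! Do you want to hear a story?"

controller = ContentController({"u1": {"name": "Dana"}})
print(controller.respond("u1", "tell me a story"))
```

The stored record personalizes the spoken response, while the current input drives the choice of content.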

Further in accordance with a preferred embodiment of the present invention the user input receiver includes an audio receiver.

Still further in accordance with a preferred embodiment of the present invention the current user input includes a verbal input received via the audio receiver.

Additionally in accordance with a preferred embodiment of the present invention the user input receiver includes a tactile input receiver.

Moreover in accordance with a preferred embodiment of the present invention the storage unit stores personal information relating to at least one user and the content controller is operative to personalize the audio content.

Further in accordance with a preferred embodiment of the present invention the storage unit stores information relating to the interaction of at least one user with the interactive toy apparatus and the content controller is operative to control the audio content in accordance with stored information relating to past interaction of the at least one user with the interactive toy apparatus.

Still further in accordance with a preferred embodiment of the present invention the storage unit also stores information relating to the interaction of at least one user with the interactive toy apparatus and the content controller also is operative to control the audio content in accordance with information relating to past interaction of the at least one user with the interactive toy apparatus.

Additionally in accordance with a preferred embodiment of the present invention the storage unit stores information input verbally by a user via the user input receiver.

Still further in accordance with a preferred embodiment of the present invention the interactive toy apparatus also includes a content storage unit storing audio contents of at least one content title to be played to a user via the speaker, the at least one content title being interactive and containing interactive branching.

Additionally in accordance with a preferred embodiment of the present invention the at least one content title includes a plurality of audio files storing a corresponding plurality of content title sections including at least two alternative content title sections, and a script defining branching between the alternative sections in response to any of a user input, an environmental condition, a past interaction, personal information related to a user, a remote computer, and a time-related condition.
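Such a content title, with its parallel audio sections and a script that branches between alternatives, can be sketched as a simple data structure. The file names, section names, and branching condition below are assumptions for illustration:

```python
# Illustrative sketch of a content title: audio sections plus a script
# that branches between alternative sections on a condition.
content_title = {
    "sections": {
        "intro":     "intro.wav",
        "rainy_day": "rainy.wav",   # alternative section A
        "sunny_day": "sunny.wav",   # alternative section B
        "ending":    "ending.wav",
    },
    # Script: each step either names a section or branches on a condition.
    "script": [
        {"play": "intro"},
        {"branch": lambda ctx: "rainy_day" if ctx.get("raining") else "sunny_day"},
        {"play": "ending"},
    ],
}

def run_script(title, context):
    """Return the ordered list of audio files the toy would play."""
    playlist = []
    for step in title["script"]:
        section = step["play"] if "play" in step else step["branch"](context)
        playlist.append(title["sections"][section])
    return playlist

print(run_script(content_title, {"raining": True}))  # → ['intro.wav', 'rainy.wav', 'ending.wav']
```

The `context` dictionary stands in for whatever input drives the branch: a user reply, an environmental condition, past interaction, or a time-related condition.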

Moreover in accordance with a preferred embodiment of the present invention the interactive toy apparatus also includes a content storage unit storing audio contents of at least one content title to be played to a user via the speaker, the at least one content title being interactive and containing interactive branching.

Further in accordance with a preferred embodiment of the present invention the at least one content title includes a plurality of parallel sections of content elements including at least two alternative sections and a script defining branching between alternative sections in a personalized manner.

Still further in accordance with a preferred embodiment of the present invention the user information storage unit is located at least partially in the toy.

Additionally in accordance with a preferred embodiment of the present invention the user information storage unit is located at least partially outside the toy.

Moreover in accordance with a preferred embodiment of the present invention the content storage unit is located at least partially in the toy.

Further in accordance with a preferred embodiment of the present invention the content storage unit is located at least partially outside the toy.

Still further in accordance with a preferred embodiment of the present invention the user input receiver includes a microphone mounted on the toy, and a speech recognition unit receiving a speech input from the microphone.

Additionally in accordance with a preferred embodiment of the present invention the user information storage unit is operative to store the personal information related to a plurality of users each identifiable with a unique code and the content controller is operative to prompt any of the users to provide the user's code.
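The prompt-for-code scheme can be sketched as a lookup of a spoken "secret name" against stored records. The exact string match below stands in for speech recognition, and the names and records are invented for illustration:

```python
def identify_user(spoken_code, users):
    """Match a spoken 'secret name' against stored user codes.

    users maps each unique code to stored personal information; a
    naive normalized string match stands in for speech recognition."""
    return users.get(spoken_code.strip().lower())

users = {"captain moon": {"name": "Ben", "age": 7}}
print(identify_user("Captain Moon", users))  # → {'name': 'Ben', 'age': 7}
```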

Moreover in accordance with a preferred embodiment of the present invention the user information storage unit is operative to store information regarding a user's participation performance.

There is also provided in accordance with a preferred embodiment of the present invention toy apparatus having changing facial expressions, the toy including multi-featured face apparatus including a plurality of multi-positionable facial features, and a facial expression control unit operative to generate at least three combinations of positions of the plurality of facial features representing at least two corresponding facial expressions.

Further in accordance with a preferred embodiment of the present invention the facial expression control unit is operative to cause the features to fluctuate between positions at different rates, thereby to generate an illusion of different emotions.
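One way to realize this fluctuation-rate scheme is sketched below; the particular emotions and rates are assumptions for illustration, not values from the disclosure:

```python
import math

# Illustrative sketch: each emotion is conveyed by fluctuating a
# facial feature between its two extreme positions at a
# characteristic rate. The emotions and rates are assumptions.
EMOTION_RATES_HZ = {"calm": 0.5, "excited": 3.0, "sleepy": 0.2}

def feature_position(emotion, t):
    """Position of a feature (0.0 to 1.0) at time t seconds,
    oscillating at the rate associated with the emotion."""
    rate = EMOTION_RATES_HZ[emotion]
    return 0.5 + 0.5 * math.sin(2 * math.pi * rate * t)

# The same feature reads as a different emotion purely from its rate.
print(round(feature_position("excited", 0.1), 3))
print(round(feature_position("sleepy", 0.1), 3))
```

Sampling `feature_position` on a timer and driving the corresponding motion element would make the same eyelid read as drowsy or agitated depending only on the rate chosen.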

Still further in accordance with a preferred embodiment of the present invention the toy apparatus also includes a speaker device, an audio memory storing an audio pronouncement, and an audio output unit operative to control output of the audio pronouncement by the speaker device, and the facial expression control unit is operative to generate the combinations of positions synchronously with output of the pronouncement.

There is also provided in accordance with a preferred embodiment of the present invention toy apparatus for playing an interactive verbal game including a toy, a speaker device mounted on the toy, a microphone mounted on the toy, a speech recognition unit receiving a speech input from the microphone, and an audio storage unit storing a multiplicity of verbal game segments to be played through the speaker device, and a script storage defining interactive branching between the verbal game segments.

Further in accordance with a preferred embodiment of the present invention the verbal game segments include at least one segment which prompts a user to generate a spoken input to the verbal game.

Still further in accordance with a preferred embodiment of the present invention the at least one segment includes two or more verbal strings and a prompt to the user to reproduce one of the verbal strings.

Additionally in accordance with a preferred embodiment of the present invention the at least one segment includes a riddle.

Moreover in accordance with a preferred embodiment of the present invention the at least one of the verbal strings has educational content.

Further in accordance with a preferred embodiment of the present invention the at least one of the verbal strings includes a feedback to the user regarding the quality of the user's performance in the game.
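Taken together, a riddle segment that prompts the user and returns feedback on the quality of the answer might look like the following sketch, in which the prompt, answer, and feedback strings are invented for illustration:

```python
# Illustrative sketch of one verbal game segment: a riddle that
# prompts the user, checks the recognized answer, and gives feedback
# on the user's performance.
riddle_segment = {
    "prompt": "I have hands but cannot clap. What am I?",
    "answer": "a clock",
    "feedback": {True: "Well done!", False: "Good try! I am a clock."},
}

def play_segment(segment, recognized_answer):
    """recognized_answer stands in for the speech recognition output."""
    correct = recognized_answer.strip().lower() == segment["answer"]
    return segment["feedback"][correct]

print(play_segment(riddle_segment, "A clock"))  # → Well done!
```

A script storage of the kind described above would then branch to the next segment based on whether the answer was correct.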

Still further in accordance with a preferred embodiment of the present invention the interactive toy apparatus further includes multi-featured face apparatus assembled with the toy including a plurality of multi-positionable facial features, and a facial expression control unit operative to generate at least three combinations of positions of the plurality of facial features representing at least two corresponding facial expressions.

Additionally in accordance with a preferred embodiment of the present invention the facial expression control unit is operative to cause the features to fluctuate between positions at different rates, thereby to generate an illusion of different emotions.

Moreover in accordance with a preferred embodiment of the present invention the interactive toy apparatus also includes an audio memory storing an audio pronouncement, and an audio output unit operative to control output of the audio pronouncement by the speaker device, and the facial expression control unit is operative to generate the combinations of positions synchronously with output of the pronouncement.

Further in accordance with a preferred embodiment of the present invention the interactive toy apparatus further includes a microphone mounted on the toy, a speech recognition unit receiving a speech input from the microphone, and an audio storage unit storing a multiplicity of verbal game segments of a verbal game to be played through the speaker device, and a script storage defining interactive branching between the verbal game segments.

Still further in accordance with a preferred embodiment of the present invention the verbal game segments include at least one segment which prompts a user to generate a spoken input to the verbal game.

Additionally in accordance with a preferred embodiment of the present invention the at least one segment includes two or more verbal strings and a prompt to the user to reproduce one of the verbal strings.

Moreover in accordance with a preferred embodiment of the present invention the at least one segment includes a riddle.

Further in accordance with a preferred embodiment of the present invention the at least one of the verbal strings has educational content.

Still further in accordance with a preferred embodiment of the present invention the interactive toy apparatus further includes a microphone mounted on the toy, a speech recognition unit receiving a speech input from the microphone, an audio storage unit storing a multiplicity of verbal game segments of a verbal game to be played through the speaker device, and a script storage defining interactive branching between the verbal game segments.

Moreover in accordance with a preferred embodiment of the present invention the verbal game segments include at least one segment which prompts a user to generate a spoken input to the verbal game.

Additionally in accordance with a preferred embodiment of the present invention at least one segment includes two or more verbal strings and a prompt to the user to reproduce one of the verbal strings. Additionally or alternatively, at least one segment comprises a riddle.

Still further in accordance with a preferred embodiment of the present invention at least one of the verbal strings has educational content.

Additionally in accordance with a preferred embodiment of the present invention the at least one of the verbal strings includes a feedback to the user regarding the quality of the user's performance in the game.

There is also provided in accordance with a preferred embodiment of the present invention a method of toy interaction including providing a toy having a fanciful physical appearance, providing a speaker mounted on the toy, providing a user input receiver, storing in a user information storage unit information relating to at least one user, and providing, via a content controller operative in response to current user inputs received via the user input receiver and to information stored in the storage unit, audio content to the user via the speaker.

Further in accordance with a preferred embodiment of the present invention the storing step includes storing personal information relating to at least one user and personalizing, via the content controller, the audio content.

Still further in accordance with a preferred embodiment of the present invention the storing step includes storing information relating to the interaction of at least one user with the interactive toy apparatus and controlling, via the content controller, the audio content in accordance with stored information relating to past interaction of the at least one user with the interactive toy apparatus.

Additionally in accordance with a preferred embodiment of the present invention the method further includes storing, in a content storage unit, audio contents of at least one content title to be played to a user via the speaker, the at least one content title being interactive and containing interactive branching.

Moreover in accordance with a preferred embodiment of the present invention the method further includes storing personal information related to a plurality of users each identifiable with a unique code and prompting, via the content controller, any of the users to provide the user's code.

Further in accordance with a preferred embodiment of the present invention the method further includes storing information regarding a user's participation performance.

Still further in accordance with a preferred embodiment of the present invention the method further includes providing multi-featured face apparatus including a plurality of multi-positionable facial features, and generating at least three combinations of positions of the plurality of facial features representing at least two corresponding facial expressions.

Additionally in accordance with a preferred embodiment of the present invention the method further includes causing the features to fluctuate between positions at different rates, thereby to generate an illusion of different emotions.

Moreover in accordance with a preferred embodiment of the present invention the method also includes storing an audio pronouncement, and providing the audio pronouncement by the speaker, and generating combinations of facial positions synchronously with output of the pronouncement.

There is also provided, in accordance with a preferred embodiment of the present invention, a system for teaching programming to students, such as school-children, using interactive objects, the system including a computerized student interface permitting a student to breathe life into an interactive object by defining characteristics of the interactive object, the computerized student interface being operative to at least partially define, in response to student inputs, interactions between the interactive object and humans; and a computerized teacher interface permitting a teacher to monitor the student's progress in defining characteristics of the interactive object.

Further in accordance with a preferred embodiment of the present invention, the computerized teacher interface permits the teacher to configure the computerized student interface.

Also provided, in accordance with a preferred embodiment of the present invention, is a teaching system for teaching engineering and programming of interactive objects to students, the system including a computerized student interface permitting a student to breathe life into an interactive object by defining characteristics of the interactive object, the computerized student interface being operative to at least partially define, in response to student inputs, interactions between the interactive object and humans, and a computerized teacher interface permitting a teacher to configure the computerized student interface.

Also provided, in accordance with another preferred embodiment of the present invention, is a computer system for development of emotionally perceptive computerized creatures including a computerized user interface permitting a user to develop an emotionally perceptive computer-controlled creature by defining interactions between the emotionally perceptive computer-controlled creature and natural humans including at least one response of the emotionally perceptive computer-controlled creature to at least one parameter, indicative of natural human emotion, derived from a stimulus provided by the natural human and a creature control unit operative to control the emotionally perceptive creature in accordance with the characteristics and interactions defined by the user.

Further in accordance with a preferred embodiment of the present invention, the parameter indicative of natural human emotion includes a characteristic of natural human speech other than language content thereof.
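As a hedged sketch of such a parameter, the function below classifies speech from its loudness alone, ignoring language content entirely. The threshold and the emotion labels are assumptions, not values from the disclosure:

```python
# Illustrative sketch: derive a crude emotion parameter from a
# characteristic of speech other than its language content - here,
# the energy (loudness) of an audio sample.
def emotion_parameter(samples):
    """Return 'excited' or 'calm' from the mean absolute amplitude
    of speech samples (values in the range -1.0 to 1.0)."""
    energy = sum(abs(s) for s in samples) / len(samples)
    return "excited" if energy > 0.3 else "calm"

print(emotion_parameter([0.8, -0.7, 0.9, -0.6]))  # loud speech → excited
print(emotion_parameter([0.05, -0.1, 0.08]))      # quiet speech → calm
```

A real implementation might also use pitch contour or speaking rate; the point is only that the parameter is derived from how the human speaks, not from what is said.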

Also provided, in accordance with a preferred embodiment of the present invention, is a method for development of emotionally perceptive computerized creatures, the method including defining interactions between the emotionally perceptive computer-controlled creature and natural humans including at least one response of the emotionally perceptive computer-controlled creature to at least one parameter, indicative of natural human emotion, derived from a stimulus provided by the natural human, and controlling the emotionally perceptive creature in accordance with the characteristics and interactions defined by the user.

Additionally provided, in accordance with a preferred embodiment of the present invention, is a method for teaching programming to school-children, the method including providing a computerized visual-programming based school-child interface permitting a school-child to perform visual programming and providing a computerized teacher interface permitting a teacher to configure the computerized school-child interface.

Also provided is a computerized emotionally perceptive computerized creature including a plurality of interaction modes operative to carry out a corresponding plurality of interactions with natural humans including at least one response to at least one natural human emotion parameter, indicative of natural human emotion and an emotion perception unit operative to derive at least one natural human emotion parameter from a stimulus provided by the natural human, and to supply the parameter to at least one of the plurality of interaction modes, and, optionally, a physical or virtual, e.g. on-screen, body operative to participate in at least one of the plurality of interactions.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be understood and appreciated from the following detailed description, taken in conjunction with the drawings in which:

Fig. 1A is a simplified pictorial illustration of a toy forming at least part of an interactive toy system constructed and operative in accordance with a preferred embodiment of the present invention;

Fig. 1B is a back view of the toy of Fig. 1A;

Fig. 2 is a partially cut away pictorial illustration of the toy of Figs. 1A and 1B;

Fig. 3 is a simplified exploded illustration of elements of the toy of Figs. 1A, 1B, and 2;

Figs. 4A, 4B, 4C, 4D and 4E are illustrations of the toy of Figs. 1A - 3 indicating variations in facial expressions thereof;

Fig. 5 is a simplified block diagram illustration of the interactive toy apparatus of a preferred embodiment of the present invention;

Fig. 6 is a functional block diagram of a base station forming part of the apparatus of Fig. 5;

Fig. 7 is a functional block diagram of a circuitry embedded in a toy forming part of the apparatus of Fig. 5;

Figs. 8A - 8G, taken together, comprise a schematic diagram of base communication unit 62 of Fig. 5;

Figs. 8H - 8N, taken together, comprise a schematic diagram of base communication unit 62 of Fig. 5, according to an alternative embodiment;

Figs. 9A - 9G, taken together, comprise a schematic diagram of toy control device 24 of Fig. 5;

Figs. 9H - 9M, taken together, comprise a schematic diagram of toy control device 24 of Fig. 5, according to an alternative embodiment;

Figs. 10 - 15, taken together, are simplified flowchart illustrations of a preferred method of operation of the interactive toy system of Figs. 1 - 9G;

Figs. 16A and 16B, taken together, form a simplified operational flow chart of one possible implementation of the opening actions of a script executed by the "Play" sub-module of Fig. 10;

Figs. 17A - 17E, taken together, form a simplified operational flow chart of one possible implementation of a story script executed by the "Play" sub-module of Fig. 10;

Figs. 18A - 18G, taken together, form a simplified operational flow chart of one possible implementation of a game script executed by the "Play" sub-module of Fig. 10;

Figs. 19A - 19C, taken together, form a simplified operational flow chart of one possible implementation of a song script executed by the "Play" sub-module of Fig. 10;

Figs. 20A - 20C, taken together, form a simplified operational flow chart of one possible implementation of the "Bunny Short" story script of Figs. 17A - 17E and executed by the "Play" sub-module of Fig. 10;

Figs. 21A - 21F, taken together, form a simplified operational flow chart of one possible implementation of the "Bunny Long" story script of Figs. 17A - 17E and executed by the "Play" sub-module of Fig. 10;

Fig. 22 is a simplified operational flow chart of the "Theme Section" referred to in Figs. 17D, 18C, 19B, and 19C;

Fig. 23A is a pictorial illustration of the development and operation of a physical toy living creature in accordance with a preferred embodiment of the present invention;

Fig. 23B is a pictorial illustration of the development and operation of a virtual living creature in accordance with a preferred embodiment of the present invention;

Fig. 23C is a simplified semi-pictorial semi-block diagram illustration of a system which is a variation on the systems of Figs. 23A - 23B in that a remote content server is provided which serves data, programs, voice files and other contents useful in breathing life into a computerized living creature;

Fig. 24A is a pictorial illustration of a school-child programming a computerized living creature;

Fig. 24B is a pictorial illustration of human, at least verbal interaction with a computerized living creature wherein the interaction was programmed by a student as described above with reference to Fig. 24A;

Figure 24C is a pictorial illustration of a creature equipped with a built in video camera and a video display such as a liquid crystal display (LCD);

Fig. 25 is a simplified software design diagram of preferred functionality of a system administrator;

Fig. 26 is a simplified software diagram of preferred functionality of teacher workstation 312 in a system for teaching development of interactive computerized constructs such as the system of Figs. 23A - 23C;

Fig. 27 is a simplified software diagram of preferred functionality of student workstation 310 in a system for teaching development of interactive computerized constructs such as the system of Figs. 23A - 23C;

Figs. 28 - 31 are examples of screen displays which are part of a human interface for the Visual Programming block 840;

Fig. 32 is a screen display which includes an illustration of an example of a state machine view of a project;

Fig. 33 is a screen display which enables a student to create an environment in which a previously generated module can be tested;

Figs. 34 - 37 are examples of display screens presented by the teacher workstation 312 of any of Figs. 23A, 23B or 23C;

Fig. 38 is a simplified flowchart illustration of the process by which the student typically uses the student workstation of any of Figs. 23A, 23B or 23C;

Fig. 39 is an example of a display screen generated by selecting Event in the Insert menu in the student workstation 310;

Fig. 40 is an example of a display screen generated by selecting Function in the Insert menu in the student workstation 310;

Fig. 41 is a simplified flowchart illustration of processes performed by the student in the course of performing steps 910 and 920 of Fig. 38;

Fig. 42 is a simplified flowchart illustration of an emotional interaction flowchart design process;

Figs. 43 - 102 illustrate preferred embodiments of a computerized programming teaching system constructed and operative in accordance with a preferred embodiment of the present invention;

Fig. 103 is a table illustration of an emotional analysis database;

Fig. 104 is an emotional analysis state chart;

Fig. 105 illustrates typical function calls and callback notifications;

Fig. 106 illustrates typical input data processing suitable for a media BIOS module;

Fig. 107 illustrates typical input data processing suitable for a UCP implementation module;

Fig. 108 illustrates typical data processing suitable for user applications and an API module;

Fig. 109 illustrates a typical UCP implementation module and media BIOS output data processing;

Fig. 110 illustrates output data processing for a protocol implementation module and media BIOS module;

Fig. 111 illustrates typical figure configuration; and

Figs. 112 - 115 illustrate typical install-check up (BT 1/4, 2/4, 3/4 and 4/4 respectively).

Attached herewith are the following appendices which aid in the understanding and appreciation of one preferred embodiment of the invention shown and described herein:

Appendix A is a computer listing of a preferred software implementation of the interactive toy system of the present invention;

Appendix B is a preferred parts list for the apparatus of Figs. 8A - 8G; and

Appendix C is a preferred parts list for the apparatus of Figs. 9A - 9G.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

Reference is now made to Fig. 1A which is a simplified pictorial illustration of a toy, generally designated 10, forming at least part of an interactive toy system constructed and operative in accordance with a preferred embodiment of the present invention. While toy 10 may be implemented in any number of physical configurations and still maintain the functionality of an interactive toy system as is described herein, for illustration purposes only toy 10 is shown in Fig. 1A as typically having a fanciful physical appearance and comprising a body portion 12 having a number of appendages, such as arms 14, legs 16, eyelids 17, eyes 18, a nose 19, and a mouth 20. Arms 14 and legs 16 may be "passive" appendages in that they are not configured to move, while eyelids 17, eyes 18 and mouth 20 may be "active" appendages in that they are configured to move as is described in greater detail hereinbelow with reference to Figs. 3 - 4E.

Fig. 1B is a back view of the toy of Fig. 1A and additionally shows toy 10 as typically having an apertured area 22, behind which a speaker may be mounted as will be described in greater detail hereinbelow.

Fig. 2 is a partially cut away pictorial illustration of the toy of Figs. 1A and 1B showing a toy control device 24, typically housed within body portion 12, and a number of user input receivers, such as switches 26 in arms 14 and legs 16 for receiving tactile user inputs, and a microphone 28 for receiving audio user inputs. It is appreciated that the various user input receivers described herein may be located anywhere within toy 10, such as behind nose 19, provided that they may be accessed by a tactile or audio user input, such as verbal input, as required.

It is appreciated that any of a multitude of known sensors and input devices, such as accelerometers, orientation sensors, proximity sensors, temperature sensors, video input devices, etc., although not particularly shown, may be incorporated into toy 10 for receiving inputs or other stimuli for incorporation into the interactive environment as described herein regarding the interactive toy system of the present invention.

Additional reference is now made to Fig. 3 which is a simplified exploded illustration of elements of the toy 10 of Figs. 1A, 1B, and 2. A facial portion 30 of body portion 12 of Fig. 1 is shown together with nose 19 and mouth 20, and having two apertures 32 for receiving eyelids 17 and eyes 18. Facial portion 30 typically sits atop a protective cover 34 which is mounted on a protective box 36. Eyelids 17, eyes 18, and mouth 20 each typically cooperate with a motion element 38 which provides a movement to each appendage. Motion elements 38 are typically driven by a gear plate 40 which is in turn controlled by a gear shaft 42 and a motor 44. Circuitry 24 effects a desired movement of a specific appendage via a corresponding motion element 38 by controlling motor 44 and gear shaft 42 to orient and move gear plate 40 depending on the desired rotational orientation of gear plate 40 relative to the current rotational orientation as determined by an optical positioning device 46. Gear plate 40 preferably selectably cooperates with a single one of motion elements 38 at a time depending on specific rotational orientations of gear plate 40. A speaker 58 is also provided for audio output. Power is typically provided by a power source 48, typically a DC power source.
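The shared-drive scheme described above, in which a single motor engages exactly one motion element at a time depending on the rotational orientation of the gear plate, can be illustrated with a short sketch. The engagement angles, step size, and function name below are hypothetical assumptions for illustration only; the patent does not specify them.

```python
# Hypothetical sketch: one motor rotates gear plate 40, and the plate
# engages exactly one motion element 38 depending on its rotational
# orientation, as read from the optical positioning device 46.
# The angles and step size below are illustrative assumptions.
ENGAGE_AT = {"eyelids": 0, "eyes": 120, "mouth": 240}  # degrees

def steps_to_engage(feature, current_deg, deg_per_step=15):
    """Motor steps needed to rotate the plate from its current
    orientation to the orientation that engages the requested
    motion element, rotating in a single direction."""
    delta = (ENGAGE_AT[feature] - current_deg) % 360
    return delta // deg_per_step

# e.g. with the plate at the eyes position (120 degrees), reaching the
# mouth position requires (240 - 120) / 15 = 8 steps
```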

Figs. 4A, 4B, 4C, 4D and 4E are illustrations of toy 10 of Figs. 1A - 3 indicating variations in facial expressions thereof. Fig. 4A shows eyes 18 moving in the direction indicated by an arrow 50, while Fig. 4B shows eyes 18 moving in the direction indicated by an arrow 52. Fig. 4C shows eyelids 17 having moved to a half-shut position, while Fig. 4D shows eyelids 17 completely shut. Fig. 4E shows the lips of mouth 20 moving in the directions indicated by an arrow 54 and an arrow 56. It is appreciated that one or both lips of mouth 20 may move.

Reference is now made to Fig. 5 which is a simplified block diagram illustration of the interactive toy apparatus constructed and operative in accordance with a preferred embodiment of the present invention. Typically, a computer 60, such as a personal computer based on the PENTIUM microprocessor from Intel Corporation, is provided in communication with a base communication unit 62, typically a radio-based unit, via an RS-232 serial communications port. It is appreciated that communication between the computer 60 and the base unit 62 may be effected via a parallel port, MIDI and audio ports of a sound card, a USB port, or any known communications port. Unit 62 is typically powered by a power supply 64 which may be fed by an AC power source. Unit 62 preferably includes an antenna 66 for communication with toy control device 24 of toy 10 (Fig. 2) which is similarly equipped with an antenna 68. Toy control device 24 typically controls motor 44 (Fig. 3), switches 26 (Fig. 2), one or more

movement sensors 70 for detecting motion of toy 10, microphone 28 (Fig. 2), and speaker 58 (Fig. 3). Any of the elements 24, 44, 26, 28, 58 and 70 may be powered by power source 48 (Fig. 3).
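As a rough illustration of the kind of command traffic such a link might carry between computer 60 and toy control device 24, the sketch below frames a command as bytes with a simple XOR checksum. The frame layout, start byte, and field names are assumptions for illustration only; the patent does not define a wire protocol.

```python
def make_frame(toy_id, command, payload=b""):
    """Build an illustrative byte frame for the base unit to radio to a
    toy: start byte, toy address, command code, payload length, payload,
    and a one-byte XOR checksum. The layout is hypothetical."""
    body = bytes([toy_id, command, len(payload)]) + payload
    checksum = 0
    for b in body:
        checksum ^= b
    return b"\x7e" + body + bytes([checksum])

frame = make_frame(1, 0x10, b"\x05")  # e.g. a motor command with one argument
```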

Computer 60 typically provides user information storage, such as on a hard disk or any known and preferably nonvolatile storage medium, for storing information relating to a user. Such information may include personal information such as the user's name; a unique user code, alternatively termed herein a "secret name," which may be a made-up or other fanciful name for the user, typically predefined and selected by the user; the age of the user; etc.

Computer 60 also acts as what is referred to herein as a "content controller" in that it identifies the user interacting with toy 10 and controls the selection and output of content via toy 10, such as via speaker 58, as is described in greater detail hereinbelow. The content controller may utilize the information relating to a user to personalize the audio content delivered to the user, such as by referring to the user by the user's secret name or speaking in a manner that is appropriate to the gender of the user. Computer 60 also typically provides content storage for storing content titles, each comprising one or more content elements used in response to user inputs received via the user input receivers described above with reference to toy 10, in response to environmental inputs, or at random. For example, a content title may be a joke, a riddle, or an interactive story. An interactive story may contain many content elements, such as audio elements, generally arranged in a script for sequential output. The interactive story is typically divided into several sections of content element sequences, with multiple sections arranged in parallel to represent alternative interactive branches at each point in the story. The content controller selects a branch according to a current user input via toy 10, previous branch selections, or other user information such as past interactions, preferences, gender, or environmental or temporal conditions.
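The parallel-section arrangement can be sketched as a small table of sections and branches. The audio identifiers below are taken from the story script table appearing later in this description; the section names, dictionary layout, and function name are illustrative assumptions.

```python
# Minimal sketch of parallel story sections keyed by recognized words.
# Audio identifiers (stm140m etc.) come from the story table later in
# this description; the section names and layout are assumptions.
story = {
    "porridge": {"audio": "stm140m", "branches": {"too hot": "papa",
                                                  "too cold": "mama",
                                                  "just right": "baby"}},
    "papa": {"audio": "stm150", "branches": {}},
    "mama": {"audio": "stm155", "branches": {}},
    "baby": {"audio": "stm160", "branches": {}},
}

def next_section(current, recognized_word):
    """Select the next section from the current user input; repeat the
    current section when the response is not recognized."""
    return story[current]["branches"].get(recognized_word, current)
```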

Computer 60 may be in communication with one or more other computers, such as a remote computer, by various known means, such as by fixed or dial-up connection to a BBS or to the Internet. Computer 60 may download from the remote computer, either in real-time or in a background or batch process, various types of content information, such as entirely new content titles, additional sections or content elements for existing titles such as scripts and voice files, general information such as weather information and advertisements, and educational material. Information downloaded from a remote computer may be previously customized for a specific user, such as by age, user location, purchase habits, educational level, and existing user credit.
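A background or batch download of this sort might be organized as a worker thread that fetches titles while play continues, for example as sketched below. The function and queue names are illustrative, not taken from the patent, and 'fetch' stands in for whatever BBS or Internet transfer is used.

```python
import queue
import threading

def background_download(titles, fetch, inbox):
    """Fetch content titles on a worker thread, queueing each result so
    the foreground toy interaction is never blocked. 'fetch' stands in
    for the actual BBS or Internet transfer."""
    def worker():
        for title in titles:
            inbox.put((title, fetch(title)))
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t

inbox = queue.Queue()
t = background_download(["weather"], lambda name: name.upper(), inbox)
t.join()  # in practice the main loop would keep running instead
```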

The content controller may also record and store user information received from a user via a user input receiver, such as verbal or other audio user inputs. Computer 60 preferably includes speech recognition capabilities, typically implemented in hardware and/or software, such as the Automatic Speech Recognition Software Development Kit for WINDOWS 95 version 3.0, commercially available from Lernout & Hauspie Speech Products, Sint-Krispijnstraat 7, 8900 Ieper, Belgium. Speech recognition may be used by the content controller to analyze speech inputs from a user to determine user selections, such as in connection with an interactive story for selecting a story branch. Speech recognition may also be used by the content controller to identify a user by the secret name or code spoken by the user and received by microphone 28.
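The secret-name identification can be sketched as a simple lookup keyed by the recognizer's output. The secret names below are those used in the scripts later in this description; the profile records themselves are made up for illustration.

```python
# Illustrative user records keyed by secret name. The secret names are
# those appearing in the scripts below; the profiles are hypothetical.
USERS = {
    "rainbow": {"real_name": "Dana", "age": 6},
    "ace": {"real_name": "Sam", "age": 8},
    "bubble gum": {"real_name": "Lee", "age": 7},
}

def identify_user(recognized_text):
    """Map the speech recognizer's output to a stored user profile, or
    None when no registered secret name matches."""
    return USERS.get(recognized_text.strip().lower())
```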

The content controller also provides facial expression control. The facial mechanism (Fig. 5) may provide complex dynamic facial expressions by causing the facial features to fluctuate between various positions at different rates. Preferably, each facial feature has at least two positions that it may assume. Two or more facial features may be moved into various positions at generally the same time and at various rates in order to provide a variety of facial expression combinations conveying a variety of different emotions. Preferably, the content controller controls the facial feature combinations concurrent with a user interaction or a content output to provide a natural accompanying expression, such as lip synchronization and natural eye movements.
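One way to realize such combinations is a table mapping each named expression to a set of feature positions, commanding only the features that must change. The expression names and positions below are illustrative assumptions, not taken from the patent.

```python
# Hypothetical expression table: each expression is a combination of
# facial feature positions. Names and positions are assumptions.
EXPRESSIONS = {
    "surprised": {"eyelids": "open", "eyes": "center", "mouth": "open"},
    "sleepy":    {"eyelids": "half", "eyes": "center", "mouth": "closed"},
}

def feature_moves(current, expression):
    """Return only the feature movements needed to reach the target
    expression from the toy's current feature positions."""
    target = EXPRESSIONS[expression]
    return {f: pos for f, pos in target.items() if current.get(f) != pos}
```

Because only differing features are commanded, the single shared motor described with reference to Fig. 3 need not be cycled through features that are already in position.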

The content controller preferably logs information relating to content provided to users and to the interactions between each user and toy 10, such as the specific jokes and songs told and sung to each user, user responses and selections to prompts such as questions, riddles, or interactive stories, and other user inputs. The content controller may utilize the information relating to these past interactions of each user to subsequently select and output content and otherwise control toy 10 as appropriate, such as by playing games with a user that were not previously played with that user or by adjusting the level of complexity of an interaction.
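Such a log might be consulted as follows when choosing the next content title. The record layout, a list of (user, title) pairs, is an illustrative assumption.

```python
def unplayed_titles(titles, log, user):
    """Filter out content titles already played to this user, using a
    log of (user, title) interaction records. The record layout is a
    hypothetical stand-in for the content controller's log."""
    played = {title for (u, title) in log if u == user}
    return [t for t in titles if t not in played]

log = [("rainbow", "joke1"), ("ace", "joke2")]  # illustrative records
```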

It is appreciated that computer 60 may be housed within or otherwise physically assembled with toy 10 in a manner in which computer 60 communicates directly with toy control device 24, rather than via base unit 62 and antennae 66 and 68, such as through wired means or optical wireless communication methods. Alternatively, computer 60 may be electronically integrated with toy control device 24.

Fig. 6 is a functional block diagram of base communication unit 62 of Fig. 5. Unit 62 typically comprises a microcontroller unit 72 having a memory 74. Unit 72 communicates with computer 60 of Fig. 5 via an adapter 76, typically connected to computer 60 via an RS-232 port or otherwise as described above with reference to Fig. 5. Unit 72 communicates with toy control device 24 of toy 10 (Fig. 2) via a transceiver 78, typically a radio transceiver, and antenna 66.

Fig. 7 is a functional block diagram of toy control device 24 of Fig. 5. Device 24 typically comprises a microcontroller unit 82 which communicates with base unit 72 of Fig. 5 via a transceiver 84, typically a radio transceiver, and antenna 68. Power is supplied by a power supply 86 which may be fed by power source 48 (Fig. 5). Unit 82 preferably controls and/or receives inputs from a toy interface module 88 which in turn controls and/or receives inputs from the speaker, microphone, sensors, and motors as described hereinabove. Transceiver 84 may additionally or alternatively communicate with interface module 88 for direct communication of microphone inputs and speaker outputs.

Reference is now made to Figs. 8A - 8G, which, taken together, comprise a schematic diagram of base communication unit 62 of Fig. 5. Appendix B is a preferred parts list for the apparatus of Figs. 8A - 8G.

Figs. 8H - 8N, taken together, comprise a schematic diagram of base communication unit 62 of Fig. 5, according to an alternative embodiment.

Reference is now made to Figs. 9A - 9G which, taken together, comprise a schematic diagram of toy control device 24 of Fig. 5. Appendix C is a preferred parts list for the apparatus of Figs. 9A - 9G.

Figs. 9H - 9M, taken together, comprise a schematic diagram of toy control device 24 of Fig. 5, according to an alternative embodiment.

Reference is now made to Figs. 10 - 15 which, taken together, are simplified flowchart illustrations of a preferred method of operation of the interactive toy system of Figs. 1A - 9G. It is appreciated that the method of Figs. 10 - 15 may be implemented partly in computer hardware and partly in software, or entirely in custom hardware. Preferably, the method of Figs. 10 - 15 is implemented as software instructions executed by computer 60 (Fig. 5). It is appreciated that the method of Figs. 10 - 15, as well as other methods described hereinbelow, need not necessarily be performed in a particular order and that, in fact, for reasons of implementation, one particular implementation of the methods may be performed in a different order than another particular implementation.

Fig. 10 describes the main module of the software and high-level components thereof. Operation typically begins by opening the communication port to the base unit 62 and initiating communication between computer 60 and toy control device 24 via base unit 62. The main module also initiates a speech recognition engine and displays, typically via a display of computer 60, the main menu of the program for selecting various sub-modules. The main module typically comprises the following sub-modules:

1) "About You" is a sub-module that enables a user to configure the system to the user's preferences by entering parameters such as the user's real name, secret name, age and date of birth, hair and eye color, gender, and typical bed-time and wake-up hours;

2) "Sing Along" is another sub-module that provides specific content such as songs with which the user may sing along;

3) "How To Play" is a sub-module tutorial that teaches the user how to use the system and play with the toy 10;

4) "Play" is the sub-module that provides the interactive content to the toy 10 and directs toy 10 to interact with the user;

5) "Toy Check-Up" is a sub-module that helps the user to solve technical problems associated with the operation of the system, such as low toy battery power or an insufficient electrical power supply to the base station; and

6) "Exit" is a sub-module that enables the user to cease the operation of the interactive toy system software and clear it from the computer's memory.
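The main menu's dispatch to these sub-modules can be sketched as a lookup table. The handler functions below are stand-ins for the sub-modules listed above, not actual implementations.

```python
# Illustrative dispatch of a main-menu selection to its sub-module.
# The lambda handlers are stand-ins for the sub-modules listed above.
MODULES = {
    "About You": lambda: "registration",
    "Sing Along": lambda: "songs",
    "How To Play": lambda: "tutorial",
    "Play": lambda: "interactive content",
    "Toy Check-Up": lambda: "diagnostics",
    "Exit": lambda: "shutdown",
}

def run_selection(choice):
    """Invoke the sub-module matching a menu selection, if any."""
    handler = MODULES.get(choice)
    return handler() if handler else "unknown selection"
```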

Fig. 11 shows a preferred implementation of the "open communication" step of Fig. 10 in greater detail. Typical operation begins with initialization of system parameters, such as setting up access to the file system of the various storage units. The operation continues by loading the display elements, opening the database, initializing the toy and the communication drivers, initializing the speech recognition software engine, and creating separate threads for various concurrently-operating activities, such that one user may interact with the toy while another user uses the computer screen and keyboard for other purposes, such as word processing.
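The separate concurrently-operating threads described above might be created as in the following sketch, where the two loop bodies stand in for the toy-interaction thread and the other user's desktop work.

```python
import threading

results = {}

def toy_thread():
    # Stands in for the thread that drives toy 10.
    results["toy"] = "interaction running"

def desktop_thread():
    # Stands in for other use of the computer screen and keyboard.
    results["desktop"] = "word processing"

threads = [threading.Thread(target=toy_thread),
           threading.Thread(target=desktop_thread)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # both activities proceed without blocking each other
```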

Fig. 12 shows a preferred implementation of the "About You" sub-module of Fig. 10 in greater detail. Typical operation begins when the user has selected the "About You" option of the main menu on the computer's screen. The user is then prompted to indicate whether the user is an existing user or a new user. The user then provides the user's identification and continues with a registration step. Some or all of the operations shown in Fig. 12 may be performed with verbal guidance from the toy.

Fig. 13 shows a preferred implementation of the registration step of Fig. 12 in greater detail. Typical operation begins by loading a registration database, selecting a secret name, and then selecting and updating parameters displayed on the computer's screen. When the exit option is selected, the user returns to the main menu described in Fig. 10.

Fig. 14 shows a preferred implementation of the "Sing Along" sub-module of Fig. 10 in greater detail. Typical operation begins with displaying a movie on the computer screen and concurrently causing all the toys 10 within communication range of the base unit to provide audio content, such as songs associated with the movie, through their speakers. The user can choose to advance to the next song or exit this module and return to the main module, such as via keyboard entry.

Fig. 15 shows a preferred implementation of the "How To Play" and "Play" sub-modules of Fig. 10. Typical operation begins with the initialization of the desired script, described in greater detail hereinbelow, minimizing the status window on the computer screen, closing the thread, and returning to the main menu. The computer continues to operate the thread responsible for the operation of the toy, and continues to concurrently display the status of the communication medium and the script on the computer screen.

Reference is now made to Figs. 16A and 16B which, taken together, form a simplified operational flow chart of one possible implementation of the opening actions of a script executed by the "Play" sub-module of Fig. 10. The implementation of Figs. 16A and 16B may be understood in conjunction with the following table of action identifiers and actions:

OPENING

Audio    Text

op002    Squeeze my foot please.
op015m   "Hi! Good morning to you! Wow, what a morning! I'm Storyteller! What's your Secret Name, please?"
op020m   "Hi! Good afternoon! Wow, what an afternoon! I'm Storyteller! What's your Secret Name, please?"
op025m   "Hi! Good evening! Wow, what a night. I'm Storyteller! What's your Secret Name, please?"
op036m   "O.K. From now on I'm going to call you RAINBOW. So, hi Rainbow, whaddaya know! O.K., Rainbow, you're the boss. You choose what we do. Say: STORY, GAME or SONG."
op040m   "Ace, straight from outer space! O.K., Ace, you're the boss. You choose what we do. Say: STORY, GAME or SONG."
op045m   "Rainbow, well whaddaya know! O.K., Rainbow, you're the boss. You choose what we do. Say: STORY, GAME or SONG."
op050m   "Bubble Gum, well fiddle de dum! O.K., Bubble Gum, you're the boss. You choose what we do. Say: STORY, GAME or SONG."
op060    "Don't be shy. We'll start to play as soon as you decide. Please say out loud: STORY, GAME or SONG."

Typical operation of the method of Figs. 16A and 16B begins by playing a voice file identified in the above table as op002. This is typically performed by instructing the toy to begin receiving a voice file of a specific time length. The voice file is then read from the storage unit of the computer and communicated via the radio base station to the toy control device, which connects the received radio input to the toy's speaker where it is output. Voice file op002 requests that the user press the microswitch located in the nose or the foot of the toy.
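That sequence, announcing the clip length and then streaming the file to the toy, can be sketched as follows. The command names and storage layout are illustrative assumptions, not the patent's actual protocol; 'send' stands in for the radio link through the base station.

```python
def play_voice_file(audio_id, storage, send):
    """Read a voice file from the computer's storage unit and stream it
    to the toy via the base station. 'send' stands in for the radio
    link; the command names are hypothetical."""
    data = storage[audio_id]
    send(("receive_audio", len(data)))  # tell the toy the clip length
    send(("audio_data", data))          # then stream the file itself

sent = []
play_voice_file("op002", {"op002": b"\x01\x02\x03"}, sent.append)
```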

If the user presses the microswitch, the script then continues by playing any of voice files op015m, op020m or op025m, each welcoming the user in accordance with the current time of day, and then requests that the user pronounce his or her secret name to identify himself or herself to the system. The script then records the verbal response of the user for three seconds. The recording is performed by the computer by sending a command to the toy to connect the toy's microphone to the toy's radio transmitter and transmit the received audio input for three seconds. The radio communication is received by the radio base station, communicated to the computer, and stored in the computer's storage unit as a file. The application software then performs speech recognition on the recorded file. The result of the speech recognition process is then returned to the script program. The script continues according to the user response by playing a personalized welcome message that corresponds to the identified secret name, or another message where an identification is not successfully made. This welcome message also requests the user to select between several options, such as a story, a game or a song. The selection is received by recording the user's verbal response and performing speech recognition. More detailed descriptions of simplified preferred implementations of a story, a game, and a song are provided in Figs. 17A - 17E, 18A - 18G, and 19A - 19C, respectively.
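The record-then-recognize-then-branch step can be sketched generically as below. Here 'record' and 'recognize' stand in for the toy's microphone path and the speech recognition engine; the option table reuses audio identifiers from the script tables, and the function name is an assumption.

```python
def prompt_and_branch(record, recognize, options, fallback):
    """Record a short verbal response, run speech recognition on it,
    and return the matching script branch, or a fallback re-prompt
    (such as op060) when no option is recognized."""
    audio = record(seconds=3)        # toy mic -> radio -> stored file
    word = recognize(audio)
    return options.get(word, fallback)

branch = prompt_and_branch(
    record=lambda seconds: b"...",   # stand-in for the 3-second capture
    recognize=lambda audio: "game",  # stand-in recognizer result
    options={"story": "stm125m", "game": "gm820m", "song": "sng320"},
    fallback="op060",
)
```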

Figs. 17A - 17E, taken together, form a simplified operational flow chart of one possible implementation of a story script executed by the "Play" sub-module of Fig. 10. The implementation of Figs. 17A - 17E may be understood in conjunction with the following table of action identifiers and actions:

STORY MENU

Audio    Text

stm105   "Hey Ace, it looks like you like stories as much as I do. I know a great story about three very curious bunnies."
stm110   "Hey Rainbow, it looks like you like stories as much as I do. I know a great story about three very curious bunnies."
stm115   "Hey Bubble Gum, it looks like you like stories as much as I do. I know a great story about three very curious bunnies."
stm125m  "A story. What a great idea! I love stories! Let's tell one together. Let's start with 'Goldilocks and the Three Bears.'"
stm130m  "Once upon a time, there was a young girl who got lost in the forest. Hungry and tired, she saw a small, cozy little house. The door was open, so she walked right in."
stm135m  "On the kitchen table were three bowls of porridge. She walked up to one of the bowls and put a spoonful of porridge in her mouth."
stm140m  "Oooh! You tell me. How was the porridge? Too Hot, Too Cold or Just Right? Go ahead, say the words: TOO HOT, TOO COLD, or JUST RIGHT."
stm150   "(Sputtering) Too hot! That was Papa Bear's bowl. The porridge was too hot."
stm155   "(Sputtering) Too cold! That was Mama Bear's bowl. The porridge was too cold."
stm160   "Hmmm. Just right! That was Baby Bear's bowl. The porridge was just right! And Goldilocks ate it all up!"
stm170   "Telling stories with you makes my day! Do you want to hear another story? Say: YES or NO."
stm180   "If you want to hear another story, just say YES. If you want to do something else, just say NO."
stm195   "I'm going to tell you a story about three very curious little bunnies."
stm205m  "Uh-oh! It looks like the bunnies are in a bit of trouble! Do you want to hear the rest of the Bunny story now? Say YES or NO."
stm206m  "Remember the Bunny story? The bunnies were eating something yummy, and then they heard someone coming. Do you want to hear what happens? Say YES or NO."
stm215m  "If you want to hear the rest of the Bunny story, say YES. If you want to do something else, say NO."
stm225   "No? - OK, that's enough for now. Remember that you can play with the Funny Bunny Booklet whenever you want. Let's see, what would you like to do now?"
stm230   "Would you like to play a game or hear a song now? Say GAME or SONG."
stm245   "Now, let's play a game or sing a song. You decide. Please - GAME or SONG."

Figs. 18A - 18G, taken together, form a simplified operational flow chart of one possible implementation of a game script executed by the "Play" sub-module of Fig. 10. The implementation of Figs. 18A - 18G may be understood in conjunction with the following table of action identifiers and actions:

GAME MENU

Audio    Text

gm805    "Hey Ace, so you're back for more games. Great! Let's play the Jumble Story again."
gm810    "Hey Rainbow, so you're back for more games. Great! Let's play the Jumble Story again."
gm815    "Hey Bubble Gum, so you're back for more games. Great! Let's play the Jumble Story again."
gm820m   "A game! What a great idea! I love playing games. Especially games that come out of stories."
gm840    "This game is called Jumble Story. The story is all mixed up and you're going to help me fix it."
gm845m   "Listen to the sentences I say when you squeeze my nose, my hand or my foot. Then squeeze again in the right order so that the story will make sense."
gm847m   "Here goes. Press my nose please."
gm855m   "(Sneezes) Oh, sorry. (Sniffles) It's O.K. now, you can press my nose."
gm860    "A woman came to the door and said she was a princess."
gm865m   "O.K. - now squeeze my foot."
gm875m   "Don't worry, I won't kick. Squeeze my foot please."
gm890    "Soon after they got married and lived happily ever after."
gm895    "One more, now squeeze my hand please."
gm905m   "Just a friendly squeeze shake if you please."
gm910    "Once upon a time, a prince was looking for a princess to marry."
gm915    "Now try to remember what you squeezed to hear each sentence. Then squeeze my hand, my foot or press my nose in the right order to get the story right."
gm921    "A woman came to the door and said she was a princess."
gm922    "Soon after they got married and lived happily ever after."
gm923    "Once upon a time, a prince was looking for a princess to marry."
gm924    "If you want to play the Jumble Story, press my nose, squeeze my hand and squeeze my foot in the right order."
gm925    "The right order is HAND, NOSE then FOOT. Try it."
gm926m   "You did it! Super stuff! What a Jumble Story player you are!"
gm930m   "And that's the way the story goes! Now it's not a jumbled story anymore! In fact, it's the story of 'The Princess and the Pea.' If you want, I can tell you the whole story from beginning to end. What do you say: YES or NO?"
gm932    "You played Jumble Story very well! Do you want to play a different game now? Say YES or NO."
gm933    "We can try this game another time. Do you want to play a different game now? Say YES or NO."
gm940    "OK, then, enough games for now. There's so much more to do. Should we tell a story or sing a song? Say: STORY or SONG."
gm945    "You tell me what to do! Go ahead. Say: STORY or SONG."
gm965m   "This is another of my favorite games. It's called the Guessing Game."
gm970    "OK, let's begin. I'm thinking about something sticky. Guess - is it A LOLLIPOP or PEANUT BUTTER? Say LOLLIPOP or PEANUT BUTTER."
gm972    "Guess which sticky thing I'm thinking about. A LOLLIPOP or PEANUT BUTTER."
gm975    "That's right! I'm thinking about a lollipop. It's sticky and it also has a stick."
gm980    "That's right! I'm thinking about peanut butter that sticks to the roof of your mouth."
gm984    "That was fantasticky. Let's try another. What jumps higher - a RABBIT or a BEAR? Say RABBIT or BEAR."
gm982    "Let's see. What jumps higher - a RABBIT or a BEAR."
gm985m   "A rabbit, that's right, a rabbit jumps (SERIES OF BOINGS) with joy unless it is a toy."
gm990    "I'd like to see a bear jump but I'd hate to have it land on me."
gm1005   "That was excellent game playing. Let's try something different. How about a story or a song now? You tell me: STORY or SONG."
gm997    "Choose what we shall do. Say STORY or SONG."

Figs. 19A - 19C, taken together, form a simplified operational flow chart of one possible implementation of a song script executed by the "Play" sub-module of Fig. 10. The implementation of Figs. 19A - 19C may be understood in conjunction with the following table of action identifiers and actions:

SONG MENU

Audio    Text

sng305   "In the mood for a song, Ace from outer space? Super! Let's do the porridge song again. Come on. Sing along with me."
sng310   "In the mood for a song, Rainbow well whaddaya know? Super! Let's do the porridge song again. Come on. Sing along with me."
sng315   "In the mood for a song, Bubble Gum, fiddle de dum? Super! Let's do the porridge song again. Come on. Sing along with me."
sng320   "A song, a song, we're in the mood to sing a song."
sng_prog Short "Pease Porridge"
sng370   "Do you want me to sing the rest of the song? Just say: YES or NO."
sng390   "That song reminds me of the Goldilocks story. Remember? - Goldilocks liked her porridge JUST RIGHT!"
sng395   "I just thought of another great song. We can hear another song, play a game, or tell a story. Just say: SONG or GAME or STORY."
sng410   "All right, we're going to do a great song now. Here goes ..." [SINGS short HEAD SNG_HAND AND SHOULDERS]
sng415   "What a song! What a great way to get some exercise! Do you want to play a game or hear a story now? Say: GAME or STORY."
sng425   "I'm in the mood for a great game or a cool story. You decide what we do. Tell me: GAME or STORY."

Figs. 20A - 20C, taken together, form a simplified operational flow chart of one possible implementation of the "Bunny Short" story script of Figs. 17A - 17E and executed by the "Play" sub-module of Fig. 10. The implementation of Figs. 20A - 20C may be understood in conjunction with the following table of action identifiers and actions:

BUNNY SHORT

Audio    Text

rb3005m  (Music)
rb005m   "(Sighing) 'Dear me,' said the Hungry Woman as she looked in her cupboard. (Squeaky noise of cupboard opening.) It was nearly empty, with nothing left except a jar of... You decide what was in the jar. HONEY, PEANUT BUTTER or MARSHMALLOW FLUFF?"
rb015    "You decide what was in the jar. Say HONEY, PEANUT BUTTER or MARSHMALLOW FLUFF."
rb026    "It was HONEY."
rb0301   "Honey!! Sweet, delicious, sticky honey, made by bees and looooved by bears."
rb0302   "Peanut butter!! Icky, sticky peanut butter that sticks to the roof of your mouth."
rb0303   "Marshmallow fluff. Gooey, white, and sticky inside-out marshmallows that taste great with peanut butter!"
rb3050m  "She reached up high into the cupboard for the one jar which was there. (Sound of woman stretching, reaching.) But she wasn't very careful and didn't hold it very well... the jar crashed to the floor, and broke. (Sound of glass crashing and breaking.)"
rb3055   "And sticky honey started spreading all over the floor."
rb3060   "And sticky peanut butter started spreading all over the floor."
rb3065   "And sticky marshmallow fluff started spreading all over the floor."
rb090m   "'Now I have to clean it up before the mess gets worse, so where is my mop?' [Sounds of doors opening and closing.] 'Oh, yes! I lent the mop to the neighbor, Mr. Yours-Iz-Mine, who never ever returns things.'"
rb3075   "She put on her going-out shoes and rushed out of the house. Then, a tiny furry head with long pointed ears, a pink nose and cotton-like tail popped up over the window sill. (Sound effect of something peeping, action.)"
rb110    "What do you think it was? A giraffe? An elephant? Or a bunny? You tell me: GIRAFFE, ELEPHANT, or BUNNY."
rb120    "No... Elephants have long trunks, not long ears."
rb125    "No... Giraffes have long necks, not long ears."
rb130    "It was a bunny! The cutest bunny you ever did see! And the bunny's name was BunnyOne. (Sniffing) 'There's something yummy-smelling in here.'"
rb195    "Now when bunnies get excited, they start hopping up and down, which is exactly what BunnyOne started to do."
rb200    "Can you hop like a bunny? When I say 'BOING,' hop like a bunny. Every time I say 'BOING,' you hop again. When you want to stop, squeeze my hand."
         (3 boings)
rb220m   "While BunnyOne was boinging away, another bunny came around. BunnyTwo was even more curious than BunnyOne and immediately peeked over the window sill. 'Hey, BunnyOne,' BunnyTwo said."
rb230    "'Let's go in and eat it all up.' 'Oh, I don't know if that's a good idea...' said BunnyOne. 'We could get into trouble.'"
rb231m   (Music)
rb235    "No sooner had BunnyOne said that, when a third pair of long ears peeked over the windowsill. Who do you think that was?"
rb245    "Right you are! How did you know that! This is fun, we're telling the story together!"
rb3155   "His name was BunnyThree!"
rb3160   "BunnyThree looked at BunnyOne and BunnyTwo and he hopped smack in the middle of the honey and started licking away."
rb3165   "BunnyThree looked at BunnyOne and BunnyTwo and he hopped smack in the middle of the peanut butter and started licking away."
rb3170   "BunnyThree looked at BunnyOne and BunnyTwo and he hopped smack in the middle of the marshmallow fluff and started licking away."
rb3175   "BunnyOne and BunnyTwo saw BunnyThree licking away and hopped in as well."
rb2751   "But even as the three bunnies were nibbling away at the honey, they heard footsteps."
rb2752   "But even as the three bunnies were nibbling away at the peanut butter, they heard footsteps."
rb2753   "But even as the three bunnies were nibbling away at the marshmallow fluff, they heard footsteps."
rb280m   (Music)

Figs. 21A - 21F, taken together, form a simplified operational flow chart of one possible implementation of the "Bunny Long" story script of Figs. 17A - 17E and executed by the "Play" sub-module of Fig. 10. The implementation of Figs. 21A - 21F may be understood in conjunction with the following table of action identifiers and actions:

46 BUNNY LONG

Audio Text rb280m (Suspenseful music) rb285 "hey Bunnies - let's go" whispered BunnyOne, who as we know was the most cautious of the bunch.

"Yeah, we're out of here" answered BynnyTwo and BunnyThree. But as they tried to get away, they saw to their dismay, that they were — stuck

Λ2901 Stuck in a honey puddle

Λ2902 Stuck in peanut butter freckle-like blobs

Λ2903 Stuck in a gooey cloud of sticky marshmallow fluff.

Rb295 "What do we do?" asked BunnyTwo? rb2961 (aside) BUBLLE GUM, don't worry, these three rabbits always manage to get away rt>2962 (aside) ACE,, don't worry, these three rabbits always manage to get away rb2963 (aside)RAINBOW, don't worry, these three rabbits always manage to get away rb297m rb300 The door opened, and in walked the Hungry Man, who had met the Hungry Woman coming back with the mop from YoursIsMines house.. rb3051 'So you mean to tell me that all we have for dinner is bread and honey rb3052 'So you mean to tell me that all we have for dinner is bread and peanut butter rb3053 'So you mean to tell me that all we have for dinner is bread and marhmallow fluff

Rb315 That's not even enough for a Rabbit?"

Which was what he said when he walked into the door and saw the three bunnies stuck to the floor. rb316m

Rb320 "Sweetie, I should have known you were kidding but you should never kid around with mc when I'm hungry. Rabbit for dinner- my favorite."

Rb330 "Hey, let's go," whispered BunnyOne.

"Yeah, we've got to get out of here," whispered BunnyTwo and Bunny Three. But when they tried to move, they found their feet firmly stuck.

Rb335 The Hungry Woman came in, she had no idea what the Hungry Man was talking about, until she saw the rabbits and said:

"(giggle) - yes dear, I was just joking. Yummy rabbits for you dinner. Why don't, you catch the rabbits while I get wood for a fire."

47 rb345 "No need to catch them," said the Hungry Man. "Those rabbits are good and stuck... right where they are. I'll go out to the garden and pick some potatoes. By the time the fire is hot, I'll be back to help you put the rabbits in the pot. And he hurried off. rb346m (Sounds of footsteps receding, door shutting.)

Rb350m "What are we going to do?" asked BunnyThree - he wasn't so brave any more.

"Let's try to jump out" said BunnyOne.

So they tried to (boing - distorted) and tried to (boing) but they couldn't budge.

Rb355m The Hungry Woman and Hungry Man came in with wood for the fire. They were whistling happily because they knew they were going to eat well. They started the fire and put on a pot of water, whistling as the fire grew hotter (whistling in the background). All this time, the rabbits stood frozen like statues.

Rb360 Can you stand as still as a statue? If you want to practice being a statue, just like the bunnies, squeeze my hand and then stand still. When you're finished being a statue, squeeze my hand again. rb370 "Right , so now you're a statue and I'll wait until you squeeze my hand."

Rb375 "Squeeze my hand before you play Statue." rb382 That was a long time to be a statue. rb385 "A little more wood and the fire will be hot enough to cook in," the Hungry Woman said to her husband, and they both went out to gather more wood. rb386 (sound effect)

Rb390 "Did you hear that?" whispered BunnyTwo fiercely. "What oh what are we going to do?"

"Let's try to jump one more time," said BunnyOne.

Rb395m Rainbow, You know, you can help them. When you hear [BOING], hop as high as you can.

Rb400m Ace, You know, you can help them. When you hear [BOING], hop as high as you can.

Rb405m Bubble gum, You know, you can help them. When you hear [BOING], hop as high as you can.

Rb410m [Sound of BOING] And up the bunnies hopped. [BOING] And again they hopped. [BOING] And again they hopped. rb4151m One more [BOING] and they were free of the puddle of honey. rb4152m One more [BOING] and they were free of the peanut butter blob. rb4153m One more [BOING] and they were free of the marshmallow fluff sticky cloud.

Rb4201 You know why? Because as the fire grew hotter, the honey grew thinner, thin enough for the rabbits to unstick their feet. rb4202 You know why? Because as the fire grew hotter, the peanut butter grew thinner, thin enough for the rabbits to unstick their feet. rb4203 You know why? Because as the fire grew hotter, the marshmallow fluff grew thinner, thin enough for the rabbits to unstick their feet.

Rb425m One more [BOING] and they were on the window sill, and then out in the garden and scurrying away. rb426m (music) rb435m Just then, the Hungry Man and the Hungry Woman walked in the door with the wood and potatoes, singing their favorite song (Peas Porridge Hot in background)

Rb440 They walked in, just in time to see their boo hoo hoo rabbit dinner hopping out and away in the garden. rb445m As they hopped, they were singing happily (Honey on the Table in background)

Fig. 22 is a simplified operational flow chart of the "Theme Section" referred to in Figs. 17D, 18C, 19B, and 19C. The Theme Section presents the user with a general introduction and tutorial to the overall application.

Appendix A is a computer listing of a preferred software embodiment of the interactive toy system described hereinabove. A preferred method for implementing software elements of the interactive toy system of the present invention is now described:

1) Provide a computer capable of running the WINDOWS 95 operating system;

2) Compile the source code of the sections of Appendix A labeled:

* Installation Source Code

* Application Source Code

* ActiveX Source Code for Speech Recognition

* CREAPI.DLL

* CRPRO.DLL

* BASEIO.DLL

* Toy Configuration Source Code into corresponding executable files onto the computer provided in step 1);

3) Install the "Automatic Speech Recognition Software Development Kit" for WINDOWS 95 version 3.0 from Lernout &amp; Hauspie Speech Products, Sint-Krispijnstraat 7, 8900 Ieper, Belgium;

4) Compile the source code of the sections of Appendix A labeled:

* Base Station Source Code

* Toy Control Device Source Code into corresponding executable files and install into the base communication unit 62 of Fig. 5 and into the toy control device 24 of Fig. 5 respectively;

5) Run the executable file corresponding to the Installation Source Code;

6) Run the executable file corresponding to the Toy Configuration Source Code;

7) Run the executable file corresponding to the Application Source Code.

It is appreciated that the interactive toy system shown and described herein may be operative to take into account not only time of day but also calendar information such as holidays and seasons and such as a child's birthday. For example, the toy may output special messages on the child's birthday or may generate a "tired" facial expression at night-time.
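The calendar-aware behavior described above can be sketched as a simple selection routine. The function name, message strings, and thresholds below are illustrative assumptions, not taken from Appendix A:

```python
from datetime import date, datetime
from typing import Optional

def select_special_message(now: datetime, birthday: date) -> Optional[str]:
    """Pick a special toy output based on calendar and time of day
    (illustrative only; the real system's messages are in Appendix A)."""
    # Birthday check ignores the year: only month and day must match.
    if (now.month, now.day) == (birthday.month, birthday.day):
        return "Happy birthday! Let's play your favorite game."
    # Night-time hours produce a "tired" expression.
    if now.hour >= 20 or now.hour < 6:
        return "[tired facial expression] I'm sleepy... yawn."
    return None  # no special message applies
```

A holiday table could be consulted in the same way, keyed on (month, day) tuples.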

Preferably, at least some of the processing functionalities of the toy apparatus shown and described herein are provided by a general purpose or household computer, such as a PC, which communicates in any suitable manner with the toy apparatus, typically by wireless communication such as radio communication. Preferably, once the toy has been set up, the PC program containing the processing functions of the toy runs in background mode, allowing other users such as adults to use the household computer for their own purposes while the child is playing with the toy.
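The background-mode arrangement described above can be sketched as a daemon worker thread that consumes toy events while the rest of the machine remains free for other users. The queue-based event protocol and event names here are hypothetical, not the system's actual wireless protocol:

```python
import queue
import threading

def toy_brain(events: "queue.Queue", responses: list) -> None:
    """Background loop: consume toy sensor events and record responses.
    A 'shutdown' event ends the loop (illustrative protocol)."""
    while True:
        event = events.get()
        if event == "shutdown":
            break
        responses.append(f"response to {event}")

events = queue.Queue()
responses = []
# daemon=True lets the process exit even if the loop is still running,
# mirroring a background service that never blocks the foreground user.
worker = threading.Thread(target=toy_brain, args=(events, responses), daemon=True)
worker.start()
events.put("hug detected")
events.put("shutdown")
worker.join()
```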

Preferred techniques and apparatus useful in generating

computerized toys are described in copending PCT application No. PCT/IL96/00157 and in copending Israel Patent Application No. 121,574 and in copending Israel Patent Application No. 121,642, the disclosures of which are incorporated herein by reference.

A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.

In the present specification and claims, the term "computerized creature" or "computerized living creature" is used to denote computer-controlled creatures which may be either virtual creatures existing on a computer screen or physical toy creatures which have actual, physical bodies. A creature may be either an animal or a human, and may even be otherwise, i.e. an object.

"Breathing life" into a creature is used to mean imparting life-like behavior to the creature, typically by defining at least one interaction of the creature with a natural human being, the interaction preferably including sensing, on the part of the creature, of emotions exhibited by the natural human being.

A "natural" human being refers to a God-created human who is actually alive in the traditional sense of the word, rather than a virtual human, toy human, human doll, and the like.

Reference is now made to Figs. 23A and 23B, which are

illustrations of the development and operation of a computerized living creature in accordance with a preferred embodiment of the present invention. Fig. 23A shows a physical creature, while Fig. 23B shows a virtual creature.

As seen in Figs. 23A and 23B, a facility for teaching the development of interactive computerized constructs is provided, typically including a plurality of student workstations 310 and a teacher workstation 312, which are interconnected by a bus 314 with a teaching facility server 316 serving suitable contents to the teacher workstation 312 and the student workstations 310. Preferably, a creature life server 318 (also termed herein a "creature support server" or "creature life support server") is provided which provides student-programmed life-like functions for a creature 324 as described in detail below. Alternatively, servers 316 and 318 may be incorporated in a single server. As a further alternative, multiple creature support servers 318 may be provided, each supporting one or more computerized living creatures. As a further alternative (not shown), a single central computer may be provided and the student and teacher workstations may comprise terminals which are supported by the central computer.

As seen in Fig. 23A, creature life support server 318 is preferably coupled to a computer radio interface 320 which preferably is in wireless communication with a suitable controller 322 within the computerized living creature 324, whereby the actions and responses of the computerized living creature 324 are controlled and stored, and its internalized experiences are preferably retained and analyzed.

It is appreciated that the computerized living creature 324 preferably is provided, by creature life server 318, with a plurality of different anthropomorphic senses, such as hearing, vision, touch, temperature and position, and preferably with composite, preferably student-programmed senses such as feelings. These senses are preferably provided by means of suitable audio, visual, tactile, thermal and position sensors associated with the computerized living creature. Additionally in accordance with a preferred embodiment of the invention, the computerized living creature 324 is endowed with a plurality of anthropomorphic modes of expression, such as speech, motion and facial expression, as well as composite forms of expression such as happiness, anger, sorrow and surprise. These expression structures are achieved by the use of suitable mechanical and electromechanical drivers and are generated in accordance with student programs via creature life server 318.
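One way to picture a student-programmed composite sense such as a "feeling" derived from the primitive senses is the following sketch. The thresholds, parameter names and mood labels are invented for illustration and do not appear in the specification:

```python
def composite_mood(audio_level: float, touch_pressure: float,
                   temperature_c: float) -> str:
    """Derive a composite 'feeling' from primitive sense readings.
    Sensor values are assumed normalized to 0.0-1.0 except temperature."""
    # Gentle touch in a quiet room reads as contentment.
    if touch_pressure > 0.5 and audio_level < 0.3:
        return "content"
    # A loud noise overrides other cues.
    if audio_level > 0.8:
        return "startled"
    # A thermal sensor can feed the composite sense too.
    if temperature_c < 10:
        return "cold"
    return "neutral"
```

A student program would map each composite mood onto a composite expression (for example, "content" onto a smile plus a purring sound).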

Referring now to Fig. 23B, it is seen that a virtual computerized living creature 334 may be created on a display 336 of a computer 338 which may be connected to bus 314 either directly or via a network, such as the Internet. The virtual computerized living creature 334 preferably is endowed with a plurality of different anthropomorphic senses, such as hearing, vision, touch, position and preferably with composite senses such as feelings. These senses are preferably provided by associating with computer 338, a microphone 340, a camera 342, and a tactile pad or other tactile input device 344.

A speaker 346 is also preferably associated with computer 338. A server 348 typically performs the functionalities of both teaching facility server 316 and creature life server 318 of Fig. 23A.

Additionally in accordance with a preferred embodiment of the invention, the virtual computerized living creature 334 is endowed with a plurality of anthropomorphic modes of expression, such as speech, motion and facial expression, as well as composite expressions such as happiness, anger, sorrow and surprise. These are achieved by suitable conventional computer techniques.

It is a preferred feature of the present invention that the computerized living creature can be given, by suitable programming, the ability to interact with humans based on the aforementioned anthropomorphic senses and modes of expression both on the part of the computerized living creature and on the part of the human interacting therewith. Preferably, such interaction involves the composite senses and composite expressions mentioned above.

Fig. 23C is a simplified semi-pictorial semi-block diagram illustration of a system which is a variation on the systems of Figs. 23A - 23B in that a remote content server 342 is provided which serves data, programs, voice files and other contents useful in breathing life into the creature 324.

Fig. 24A is a pictorial illustration of a student programming the creature 324 (not shown), preferably using a simulation display 350 thereof. Programming is carried out by the student in interaction with the student workstation 310. Interaction may be verbal or alternatively may take place via any other suitable input device such as keyboard and mouse.

The command "play record", followed by speech, followed by "stop", means that the student workstation should record the speech content generated by the student after "play record", up to and not including "stop" and store the speech content in a voice file and that the creature life server 318 should instruct the creature 324 to emit the speech content stored in the voice file.
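The "play record ... stop" semantics described above can be sketched as a scan over tokenized speech. The function name and the tokenized-input representation are assumptions for illustration; the actual system records audio rather than text:

```python
def extract_recording(tokens):
    """Collect the words spoken between 'play record' and 'stop'.
    The recorded content excludes 'stop' itself, per the command semantics."""
    recording = []
    recording_on = False
    i = 0
    while i < len(tokens):
        # The two-word command 'play record' starts the recording.
        if not recording_on and tokens[i:i + 2] == ["play", "record"]:
            recording_on = True
            i += 2
            continue
        # 'stop' ends the recording and is not stored.
        if recording_on and tokens[i] == "stop":
            break
        if recording_on:
            recording.append(tokens[i])
        i += 1
    return " ".join(recording)
```

The resulting string stands in for the voice file that the creature life server would later instruct the creature to emit.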

"If - then -endif", "speech recognition", "speech type", "and" and "or" are all control words or commands or programming instructions, as shown in Fig. 31.
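A minimal sketch of how such control words might drive an interaction script follows. The "say" command, the condition form, and the recognized-word set are illustrative assumptions, not the actual instruction set shown in Fig. 31:

```python
def run_script(lines, heard):
    """Tiny interpreter for 'if <word> then ... endif' blocks keyed on
    recognized speech. 'heard' is the set of words speech recognition
    reported (all names illustrative)."""
    output = []
    skipping = False
    for line in lines:
        parts = line.split()
        if parts[0] == "if":
            # Execute the block only if the word was recognized.
            skipping = parts[1] not in heard
        elif parts[0] == "endif":
            skipping = False
        elif not skipping and parts[0] == "say":
            output.append(" ".join(parts[1:]))
    return output
```

A student script such as ["if hello then", "say hi there", "endif", "say always"] then speaks "hi there" only when "hello" was heard, but always speaks "always".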

Fig. 24B is a pictorial illustration of human, at least verbal, interaction with a computerized living creature, wherein the interaction was programmed by a student as described above with reference to Fig. 24A.

Figure 24C is a pictorial illustration of a creature 324 equipped with a built-in video camera 342 and a video display 582, such as a liquid crystal display (LCD). The video camera provides visual inputs to the creature and, via the creature and the wireless communication, to the computer. The display enables the computer to present the user with more detailed information. In the drawing, the display is used to present more detailed and more flexible expressions involving the eyes and eyebrows. A color display enables the computer to adapt the color of the eyes to the user or subject matter.

It is a particular feature of the present invention that an educational facility is provided for training engineers and programmers to produce interactive constructs. It may be appreciated that a teacher may define for a class of students an overall project, such as programming the behavior of a policeman. He can define certain general situations which may be broken down into specific events. Each event may then be assigned to a student for programming an interaction suite.

For example, the policeman's behavior may be broken up into modules such as interaction with a victim's relative, interaction with a colleague, interaction with a boss, interaction with a complainer who is seeking to file a criminal complaint, interaction with a suspect, interaction with an accomplice, and interaction with a witness. Each such interaction may have sub-modules depending on whether the crime involved is a homicide, a non-homicidal crime of violence, a crime of vice, or a crime against property. Each module or sub-module may be assigned to a different child.
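The module breakdown above lends itself to a simple data structure mapping each (interaction, crime type) sub-module to a student. The round-robin assignment and all names below are hypothetical, added only to illustrate the structure:

```python
from itertools import product

# Hypothetical breakdown of the policeman project into modules and sub-modules.
interactions = ["victim's relative", "colleague", "boss", "complainer",
                "suspect", "accomplice", "witness"]
crime_types = ["homicide", "violent crime", "vice", "property crime"]

def assign_submodules(students):
    """Assign every (interaction, crime type) sub-module to a student,
    cycling round-robin through the class roster."""
    submodules = list(product(interactions, crime_types))
    return {sub: students[i % len(students)] for i, sub in enumerate(submodules)}
```

With seven interactions and four crime types, a class of students divides the 28 sub-modules among themselves.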

