Report on the Dorkbot Meeting of 6 July

The 6th edition of Dorkbot Second Life was very, very interesting!
JJ Ventrella presented Avatar Puppeteering, and Philippe Bossut told us all about the HandsFree3D project, and his work on connecting this 3D camera tracking to the puppeteering feature.
Read the Meeting Announcement for more information and bios of the presenters.
Over 70 people showed up, so obviously there is a lot of interest in these technologies.
A number of people wanted to know how they could try it and get involved in bringing these technologies to fruition. Therefore I offered to make a puppeteering client available and organize a workshop in the near future.
To this end I created the group “Avatar Puppeteers”. It’s open to anybody who wishes to contribute to the further development and usage of Avatar Puppeteering.
Though things were kept simple, using only video and text chat, the meeting was probably the most chaotic one we ever had! Anticipating the huge interest in this topic (who doesn’t want more expressive avatars and more fluid ways to interact?), I had called in the help of a skillful colleague organizer, but unfortunately his laptop broke down shortly before the meeting! Oh well, I now know that you need 4 people to keep an event like this under control. It was very nice to see Sugar Seville’s seats filled with people, and to see that Odyssey, home of the dorkbot place, can handle this 🙂
A lot was learned, connections were made, and groups were joined. Hurray!

So here comes the chat transcript (heavily edited for readability), along with some snapshots by Fau Ferdinand and Bea Fontani.

[13:04] Evo Szuyuan: Welcome to the 6th Dorkbot Second Life meeting!
[13:04] Evo Szuyuan: Dorkbot refers to a group of affiliated organizations
[13:04] Evo Szuyuan: worldwide that hold informal meetings of artists, engineers, and designers
[13:04] Evo Szuyuan: working in the medium of electronic art.
[13:05] Evo Szuyuan: Its motto is ‘people doing strange things with electricity’. For Dorkbot Second Life, this theme is slightly modified to ‘people doing strange things with Second Life’.
[13:05] Evo Szuyuan: We want to explore the new possibilities of the metaverse and try to establish a platform for a glocal way of creating new connections and collaborations.
[13:05] Evo Szuyuan: If you’d like to be informed on future meetings and news, join the ‘dorkbot SL electronic art meetings’ group.
[13:07] Evo Szuyuan: I’m very happy to introduce you now to the main developers of 2 very exciting projects.
[13:07] Evo Szuyuan: Ventrella Linden (JJ Ventrella in RL), who you may also know as the man behind the virtual world There, flexi prims and follow cam (and much more) will present Avatar Puppeteering.
[13:07] Evo Szuyuan: Roger Fullstop (Philippe Bossut in RL), who works as an β€œEntrepreneur in Residence” at Kapor Enterprises Inc (KEI) is going to tell us about Segalen/HandsFree3D.
[13:08] Evo Szuyuan: (please go to for their complete bio and many interesting links of what they have been up to).
[13:08] Evo Szuyuan: JJ and Roger will each give a 20-minute presentation.
[13:08] Evo Szuyuan: After each presentation we have about 5 minutes for questions.
[13:08] Evo Szuyuan: After both presentations we will have a general discussion.
[13:09] Evo Szuyuan: so I will now give the word to Ventrella Linden!

[13:09] Ventrella Linden: Thanks Evo!
[13:09] Ventrella Linden: Hi
[13:09] Ventrella Linden: I worked at Linden Lab for 2 years.
[13:09] Ventrella Linden: I’m passionate about creating intuitive motion.
[13:10] Ventrella Linden: While I was at Linden Lab, I did 3 main things:
[13:10] Ventrella Linden: FollowCam,
[13:10] Ventrella Linden: Flexies….
[13:10] Ventrella Linden: and Puppeteering
[13:10] Ventrella Linden: More info about puppeteering at
[13:10] Ventrella Linden: Let’s see the first movie!

Pose Movie

[13:11] Ventrella Linden: Here’s an example of manipulating a physical avatar.
[13:11] Ventrella Linden: ok..

[13:12] Ventrella Linden: Normal avatar animation uses a local coordinate system.
[13:12] Ventrella Linden: I call it the “lonely coordinate system”.
[13:12] Ventrella Linden: The goal of Puppeteering is to put avatars and users into a common space.
[13:12] Ventrella Linden: Avatar-to-avatar, and user-to-avatar manipulation is the ultimate goal.

[13:13] Ventrella Linden: The first two goals of physical avatar were:
[13:13] Ventrella Linden: (1) real-time expression
[13:13] Ventrella Linden: (2) free-form posing to make animations
[13:13] Ventrella Linden: next image…
[13:14] Ventrella Linden: How many times have you wanted to just nudge your avatar to a specific pose?

[13:14] Ventrella Linden: The physical avatar takes the normal avatar skeleton and adds a few things.
[13:14] Ventrella Linden: For instance, it adds 5 “end effectors”.

[13:14] Ventrella Linden: It also uses spring physics for the “bones”.
[13:15] Ventrella Linden: The springs hold the joints together.
[13:15] Ventrella Linden: (opposite of nature)
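The spring idea, bones as damped springs that pull each pair of joints toward a rest length, can be sketched in a few lines. This is an illustrative Python toy, not the viewer’s actual C++ code; `Joint` and `relax` are hypothetical names.

```python
import math

# Illustrative sketch: a "bone" is a spring that pulls its two joints
# toward the bone's rest length, so the skeleton holds together while
# individual joints are dragged around freely.
class Joint:
    def __init__(self, x, y, z):
        self.pos = [x, y, z]

def relax(a, b, rest_length, stiffness=0.5, iterations=50):
    """Iteratively move both joints so their distance approaches rest_length."""
    for _ in range(iterations):
        d = [b.pos[i] - a.pos[i] for i in range(3)]
        dist = math.sqrt(sum(c * c for c in d)) or 1e-9
        correction = stiffness * (dist - rest_length) / dist
        for i in range(3):
            a.pos[i] += d[i] * correction * 0.5  # each joint takes half the fix
            b.pos[i] -= d[i] * correction * 0.5

shoulder = Joint(0.0, 0.0, 0.0)
elbow = Joint(1.0, 0.0, 0.0)   # dragged too far: upper-arm rest length is 0.3
relax(shoulder, elbow, rest_length=0.3)
# the two joints settle about 0.3 apart, wherever the user dragged them
```

Turning gravity on for such a system is essentially the “rag doll mode” shown in the next movie.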

ragdoll movie

[13:15] Ventrella Linden: This movie shows “rag doll mode”.
[13:15] Ventrella Linden: That’s basically physical avatar with gravity turned ON.
[13:16] Ventrella Linden: BTW, to see a cute example of ragdoll physics…
[13:16] Ventrella Linden: Google “Ventrella”, and “Peanut”.
[13:16] Ventrella Linden: OK – done with plug…

[13:16] Ventrella Linden: A few extra springs are added to hold the chest and pelvis regions together.
[13:17] Ventrella Linden: These form tetrahedra – one of Bucky Fuller’s favorite shapes!
[13:17] Ventrella Linden: Tensegrity – one cool concept.
[13:17] Man Michinaga: for sure, also good work on it in team management
[13:17] Ventrella Linden: Evo’s friend deJong knows this.
[13:17] Ventrella Linden: Roll the next movie!

tets video

[13:17] Ventrella Linden: This movie shows the internal skeleton springs – forming tetrahedra.

[13:18] Ventrella Linden: The end effectors serve a specific purpose. Let me explain…

[13:19] Ventrella Linden: Standard skeletal animation has an “outward” flow:

[13:19] Ventrella Linden: Parent joints affect the locations of child joints.
[13:19] Ventrella Linden: …your basic hierarchical animation scheme.
[13:20] Ventrella Linden: next image (the inward diagram)

[13:20] Ventrella Linden: Physical avatar does the opposite. It takes the position of a joint…
[13:20] Ventrella Linden: …and calculates the rotation of its parent.
[13:20] Ventrella Linden: And so when you manipulate the avatar’s finger-tip (an end-effector)…
[13:20] Ventrella Linden: It rotates the wrist joint.
[13:20] Ventrella Linden: It’s sort of a variation of inverse kinematics – using physics.
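This “inward” flow, taking the position of a joint and computing its parent’s rotation, can be illustrated with a 2D toy (Python, hypothetical names, not the viewer code): drag the fingertip end effector and solve for the wrist angle.

```python
import math

def parent_rotation(parent_pos, child_target):
    """Inward flow: from where the child joint (end effector) should be,
    compute the rotation its parent joint must take to point at it."""
    dx = child_target[0] - parent_pos[0]
    dy = child_target[1] - parent_pos[1]
    return math.atan2(dy, dx)

def place_child(parent_pos, angle, bone_length):
    """The usual outward flow: the parent's rotation places the child."""
    return (parent_pos[0] + bone_length * math.cos(angle),
            parent_pos[1] + bone_length * math.sin(angle))

wrist = (0.0, 0.0)
target = (0.3, 0.3)                          # user drags the fingertip here
angle = parent_rotation(wrist, target)       # this rotates the wrist joint
fingertip = place_child(wrist, angle, 0.25)  # fingertip now points at target
```

A full IK solver would propagate this up the chain (fingertip rotates the wrist, wrist position rotates the elbow, and so on); Physical Avatar lets spring physics do that propagation.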
[13:20] Ventrella Linden: Next movie please!

constraints movie

[13:21] Ventrella Linden: I also added a number of “humanoid constraints”.
[13:21] Ventrella Linden: These keep things from bending the wrong way – like knees – (ouch!)
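In the simplest case, a “humanoid constraint” like the one just mentioned reduces to clamping each joint’s bend angle to an anatomically plausible interval. A minimal sketch; the limit values and names here are invented, not taken from the viewer code:

```python
# Hypothetical per-joint limits in radians; a knee only bends one way.
KNEE_LIMITS = (0.0, 2.6)

def constrain(angle, limits):
    """Clamp a joint angle so things can't bend the wrong way."""
    lo, hi = limits
    return max(lo, min(hi, angle))

constrain(-0.4, KNEE_LIMITS)  # a backwards knee bend ("ouch!") snaps to 0.0
constrain(1.0, KNEE_LIMITS)   # a normal bend passes through unchanged
```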

[13:21] Ventrella Linden: Physical Avatar was made to make it easy to alter the code to…
[13:21] Ehdward Spengler: this system would be even more useful if it could be applied to objects
[13:21] Ventrella Linden: allow residents to….
[13:21] Ventrella Linden: pose each other!
[13:21] Ventrella Linden: (not suitable for teens 🙂
[13:22] Ventrella Linden: next image…

[13:22] Ventrella Linden: That’s all folks!
[13:22] Ventrella Linden: Questions?
[13:22] Osprey Therian: Can a series of moves be recorded and played back, so to speak?
[13:23] Ventrella Linden: hmmm- Aura Linden..
[13:23] Ventrella Linden: worked on the part where…
[13:24] Ventrella Linden: you’ll be able to save poses, etc.
[13:24] Osprey Therian: Would it be possible, in the future, to make bvh files this way?
[13:24] Ventrella Linden: I don’t know the status right now.
[13:24] Man Michinaga: We saw mouse manipulation
[13:24] Man Michinaga: but are there going to be resources for external data input, like arduino, XML etc.?
[13:25] Man Michinaga: can we take external input and apply it to this from device streams?
[13:25] Ehdward Spengler: no but you can have a free STFU card
[13:25] Aminom Marvin: STFU stands for Standardized Text Function Unwrapper. Through implementation of STFU user experience increases 200%
[13:25] Aminom Marvin: is there anything like puppeteering, hierarchical groups, or anything for prims or sculpted prims planned that is animation related?
[13:25] Ehdward Spengler: yes, could this be applied to objects? like build your own bone system, that would be great
[13:25] KRYSTAL Bleac: is this right and does it stay there?
[13:25] KRYSTAL Bleac: euh i saw on one of the pics the balls were making shadows on the ground…
[13:26] Ventrella Linden: yes – shadow
[13:26] KRYSTAL Bleac: those were shadows of the balls?
[13:26] Ventrella Linden: Those shadows were created by my physical avatar joints
[13:27] Ventrella Linden: they reflect pretty accurately
[13:27] Ventrella Linden: the location of the body above ground
[13:27] Shava Suntzu: Havoc/Euphoria? Will we ever have “natural” interactions with physical objects/landscape?
[13:27] Ventrella Linden: Euphoria is miles above this…and prolly too high end.
[13:27] Ayumi Cassini: is there any way we can test it ourselves?
[13:28] Political Magic: Is this technology in place now? Or is this a proposal?
[13:28] McGroarty Playfair: At present, the viewer with these features is in the public subversion, and there is one sim on the beta grid with the sim side support. You can see the features without needing to be on that sim, though.
[13:29] Ventrella Linden: Roger can explain more the current status of the code.
[13:29] Roger Fullstop: There’s a version of the “Puppeteering branch” that has been made public by Jake Linden on sldev
[13:30] Roger Fullstop: and it’s limited to local puppeteering (client side only)
[13:30] Roger Fullstop: I spent the most part of the last month cleaning up and fixing bugs
[13:31] Roger Fullstop: I logged a JIRA on this and will post a patch soon
[13:31] McGroarty Playfair: Info about the puppeteering branch – what it is, where to get it:
[13:31] Roger Fullstop: the code is open source
[13:31] Roger Fullstop: you can pick it up now
[13:32] Ventrella Linden: Open Source, y’all.
[13:32] Ayumi Cassini: is there a compiled version somewhere?
[13:32] Roger Fullstop: not that I know of
[13:32] Ayumi Cassini: 😦
[13:33] McGroarty Playfair:
[13:33] Evo Szuyuan: I’d be happy to organize a puppeteering workshop and downloads of a puppeteering client
[13:33] Evo Szuyuan: but let’s talk more about this after Roger speaks..
[13:33] Artm Udal: q: how do you actually control the avatar?
[13:33] Artm Udal: say i wanna make a pose…
[13:33] Roger Fullstop: Ventrella implemented a “Ctrl” UI where you can select a joint and move it
[13:33] Roger Fullstop: but that’s joint by joint of course
[13:34] Roger Fullstop: that’s where the 3D camera will play a role (I hope)
[13:34] Feynt Mistral: Take it away Roger. >)
[13:34] Man Michinaga: yes – 3d cam
[13:35] Roger Fullstop: Maybe it’s time to start on the camera part of the talk?
[13:35] Ventrella Linden: cool.
[13:35] Evo Szuyuan: yes!
[13:35] Evo Szuyuan: Shall we start with a movie?
[13:35] Roger Fullstop: please do

[13:38] McGroarty Playfair: Philippe: Any special markers for that?
[13:38] Roger Fullstop: Nope
[13:38] McGroarty Playfair: Very cool!
[13:38] Roger Fullstop: it’s markerless mocap
[13:39] Saijanai Kuhn:
[13:40] Artm Udal: what’s a 3d camera?
[13:40] McGroarty Playfair: Artm: It’s a camera with implants.
[13:40] Roger Fullstop: the camera is a 3DV Systems camera
[13:40] Roger Fullstop: it’s a prototype
[13:40] Roger Fullstop: not commercially available yet
[13:41] Paul Bumi: expensive?
[13:41] Roger Fullstop: according to 3DV and other manufacturers, those cameras will be priced around $100
[13:41] Artm Udal: what exactly does it see?
[13:41] Roger Fullstop: give or take some…
[13:41] Saijanai Kuhn: wow that’s quite impressive, and that’s for the low end I would imagine
[13:42] Evo Szuyuan: basically it’s video tracking artm
[13:42] Evo Szuyuan: it’s called 3d because it also sees depth
[13:42] Artm Udal: like the thing in wii?
[13:42] Feynt Mistral: Not quite.
[13:42] Artm Udal: stereo?
[13:42] Evo Szuyuan: sometimes it’s done with 2 cameras
[13:42] Roger Fullstop: correct evo
[13:42] Feynt Mistral: It’s like sonar, only with lasers.
[13:42] Roger Fullstop: no, the wii tracks an active device
[13:42] Saijanai Kuhn: wii tracks two infrared beams, not normal visual input
[13:42] Roger Fullstop: this camera records an RGB and an IR video
[13:43] Artm Udal: what does this camera track? markers?
[13:43] Roger Fullstop: the IR is used to compute the depth channel
[13:43] Feynt Mistral: Infrared light is emitted by the LEDs around the camera, and they bounce off of the person.
[13:43] Anecdote Allen: it is like a simple motion capture system
[13:43] McGroarty Playfair: It pulses IR and looks at the intensity of what’s reflected back?
[13:43] Feynt Mistral: The light then bounces back to the camera, and it reads the intensity of the light.
[13:43] Roger Fullstop: Feynt: yes, that’s what this camera (3DV) does
[13:43] Artm Udal: ah i see
[13:44] Feynt Mistral: Light would be brightest off of surfaces that face the camera.
[13:44] Roger Fullstop: they use what they call TOF (Time Of Flight) tech
[13:44] Saijanai Kuhn: could make it more accurate by using 2 or more different frequencies
[13:44] Feynt Mistral: So in effect, it’s sonar, only with light.
[13:44] Artm Udal: nice. how heavy is it?
[13:44] Ventrella Linden: So, how do you light the subject matter?
[13:44] Neo692 Beck: very impressive….
[13:44] Saijanai Kuhn: normal visual uses normal lighting. The IR would be self-lighting
[13:44] Roger Fullstop: the camera includes the lighting Ventrella
[13:45] Ventrella Linden: Ah. cool.
[13:45] Ventrella Linden: straight-on lighting.
[13:45] Saijanai Kuhn: so it self-lights the RGB also?
[13:45] spinster Voom: does it need a lot of bandwidth?
[13:46] KRYSTAL Bleac: spinster Voom: nope you just push keys using a program on your pc
[13:45] Neo692 Beck: yes Saijanai
[13:45] Roger Fullstop: note that other manufacturers of 3D (depth) cams are coming with alternative technologies
[13:45] Feynt Mistral: It could theoretically be foiled by having a lot of IR light from lighting in the room.
[13:45] McGroarty Playfair: Camera with z-buffer makes me excited and fidgety <3
[13:45] Feynt Mistral: Like bay windows.
[13:45] Artm Udal: we use the stereo one. but its depth is very noisy
[13:46] Roger Fullstop: Feynt: they have some ways to eliminate the ambient IR
[13:46] Feynt Mistral: Oh neat.
[13:46] Feynt Mistral: Probably a frequency deal.
[13:46] Roger Fullstop: but with my tests, indeed, glass panels behind the camera create lots of noise problems
[13:46] McGroarty Playfair: Possibly record frames with and without the local IR light, and compare frames.
[13:46] Evo Szuyuan: shall we go on with the next movie now that we know what a 3d cam is?
[13:46] Roger Fullstop: sure, please do
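The time-of-flight principle discussed above comes down to one relation: the camera emits an IR pulse, measures (directly, or via gated reflected intensity as with the 3DV prototype) how long the reflection takes to return, and halves the round trip. A rough numeric sketch, not the camera’s SDK:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_depth(round_trip_seconds):
    """Time-of-flight depth: half the round-trip time, times c."""
    return C * round_trip_seconds / 2.0

# A subject 2 m from the camera returns the pulse in roughly 13 nanoseconds,
# which is why these cameras need very fast gating electronics.
round_trip = 2 * 2.0 / C
depth = tof_depth(round_trip)  # about 2.0 m
```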

[13:49] Artm Udal: the fist is to distinguish left and right hand?
[13:50] Roger Fullstop: Artm: sort of
[13:50] Artm Udal: coz at some point he had left hand to the right of the right 🙂
[13:50] Fau Ferdinand: it would be great to get out of the chair
[13:50] KRYSTAL Bleac: it looks fun but i don’t know if it’s easier than a mouse to build with…
[13:50] Feynt Mistral: Wouldn’t it make more sense to implement a system that makes more use of your hand gestures?
[13:51] Feynt Mistral: Like, moving your hands apart to scale, or moving in a circular motion to rotate?
[13:51] Paul Bumi: I like to move things fast and precisely when building.. this looks fun to do but anything but fast and precise
[13:51] Roger Fullstop: agree but my point was to show that I could do fine control of the hand tracking
[13:51] KRYSTAL Bleac: ok
[13:51] Roger Fullstop: indeed, a *real* implementation would require developing a specific interface
[13:52] Roger Fullstop: one that takes advantage of having your *hands* on the object
[13:52] Saijanai Kuhn: tactile feedback is a ways off, however… 😉
[13:52] Saijanai Kuhn: 3D camera with tactile feedback…
[13:52] Feynt Mistral: Nuts.
[13:53] Tickled Pink thinks of fingers on iphone screen being adapted for an SL input device. hmmmm.
[13:53] paulie Femto: Novint makes a haptic controller (the Falcon) which is available today and doesn’t cost a fortune.
[13:53] Man Michinaga: I am very much for the possibility of readily available full haptic worlds
[13:54] Osprey Therian: I’m always heartened to see more projects that increase our possibilities. I think we don’t quite understand how constrained we are atm.
[13:53] Ventrella Linden: In my opinion,
[13:53] Ventrella Linden: the whole virtual worlds movement will benefit from this more kinematic input.
[13:53] Ventrella Linden: Better for your health.
[13:53] Roger Fullstop: Ventrella: agree with you
[13:53] Roger Fullstop: this is why I started toying with puppeteering 2 months ago
[13:54] Roger Fullstop: I have a version with puppeteering with the cam
[13:54] Roger Fullstop: but I need to fix some scaling issues (scale per bone)
[13:52] Artm Udal: does the camera see like two spots for hands? or can it distinguish fingers?
[13:53] Artm Udal: you could chat in sign language then
[13:53] McGroarty Playfair: Shot for making gang signs. I was only trying to build a torus. 😦
[13:53] Feynt Mistral: Yes, on that note, Ventrella can you talk to someone about giving us finger control for sign language?
[13:54] Saijanai Kuhn: There was a demo a couple of years ago of taking standard sign language notation and converting it to animation.
[13:54] Saijanai Kuhn: you could do the opposite. Take text input and convert it to sign language
[13:54] Ventrella Linden: Roger and I discussed briefly…
[13:54] Ventrella Linden: finger and hands.
[13:54] Ventrella Linden: Physical avatar adds a few joints.
[13:55] Ventrella Linden: It could add more, and then…
[13:55] Ventrella Linden: you’d have hand skeletons.
[13:55] Ventrella Linden: (not that it’ll be easy, but…)
[13:54] Evo Szuyuan: Roger.. can you tell us about the reaction of the people who tried it?
[13:55] Roger Fullstop: Evo: every time I have people using it, they love the feeling of doing something with their whole body
[13:55] Roger Fullstop: it’s much more engaging
[13:55] Roger Fullstop: I need to allow doing this sitting though
[13:56] Roger Fullstop: as lots of people asked for it
[13:56] Roger Fullstop: for the moment, my tracking SDK works better in a standing position though
[13:57] Aymeric Yalin: how about puppeteering jaws and cheeks, and have the face move as we speak, or smile?
[13:57] McGroarty Playfair: Cool thing about middle mouse button for push to talk is you can still pick up the mouse and gesticulate.
[13:58] Roger Fullstop: Anyway, I’m just starting for the moment, there’s loads of problems to solve for sure (hands, gaze, etc…)
[13:58] Hypatia Callisto: it’s exciting 😀
[13:58] McGroarty Playfair: Gaze would be wonderful
[13:58] Man Michinaga: for sure
[13:58] Roger Fullstop: Aymeric: face animation is really exciting and I’ve looked at a couple of technologies for this
[13:59] spinster Voom: unintentional movements could be a problem? fidgeting etc?
[13:59] Roger Fullstop: It looks like depth won’t help much though
[13:59] Hypatia Callisto: my world for facial animation
[13:59] Roger Fullstop: the RGB stream will be better
[13:59] Roger Fullstop: and there’s tons of literature on the subject
[13:59] Saijanai Kuhn: yeah, google MPEG BAP or FAP
[13:59] McGroarty Playfair: Roger: Please please do push for a developer release, not just holding out until this is all done before release.
[14:00] Roger Fullstop: McGroarty: I hear you but, right now, I’m linking statically with camera SDKs that are very GPL unfriendly
[14:00] Roger Fullstop: I need to solve that before releasing something
[14:00] Roger Fullstop: also, remember that the cameras are not yet available…
[14:01] Feynt Mistral: Not a problem really, Roger. I’m sure we could come up with our own IR cameras. >3
[14:01] Feynt Mistral: Those of us who would bother with compiling the client would probably be able to jury rig such a device.
[14:01] Saijanai Kuhn: what does the protocol look like from client to server?
[14:01] Roger Fullstop: Saijanai: all you’ve seen so far doesn’t require any change to the server side (navigation and editing, I mean)
[14:02] Saijanai Kuhn: well, you’re sending something from client to server that isn’t part of the usual stream, aren’t you?
[14:02] Roger Fullstop: puppeteering does but LL has some work done on that
[14:03] Ventrella Linden: more questions?
[14:03] Evo Szuyuan: yes! many more 🙂
[14:03] Roger Fullstop: shoot
[14:03] Evo Szuyuan: to go back to an earlier question of artm about puppeteering..
[14:04] Evo Szuyuan: how cpu intensive is it?
[14:04] Ventrella Linden: The viewer side shouldn’t be too bad.
[14:04] Ventrella Linden: Could be optimized.
[14:04] Artm Udal: especially when there are many avatars next to each other bumping
[14:04] Ventrella Linden: Server side is a different domain.
[14:04] Man Michinaga: right
[14:05] McGroarty Playfair: Man: At present, LL has indicated no definite plans for taking it forward. If the community does something interesting that could easily change.
[14:05] McGroarty Playfair: Man: You can subscribe to the sldev mailing list or look through the archives to see discussion on that.
[14:06] Evo Szuyuan: right now you can compile a viewer, but server side it’s only available on 1 sim on the beta grid
[14:07] Ayumi Cassini: which sim?
[14:07] Evo Szuyuan: puppeteering sim
[14:07] Osprey Therian: I took that to mean if the community did things to get it rolling there might be specialised viewers.
[14:07] Evo Szuyuan: yes osprey..
[14:07] Osprey Therian: and we could prod them into doing the server-side bit
[14:08] Osprey Therian forms a guerrilla ragdoll underground army
[14:07] Evo Szuyuan: if people are up to it, it would be great to see what we can do now, and streamline some thoughts
[14:08] Roger Fullstop: would be great to have the server side supported by LL indeed
[14:08] McGroarty Playfair: Ayumi: You can use the features without being on an enabled sim as well – it’s just that only you see the results.
[14:08] Saijanai Kuhn: just wondering at how it is streamed to the server (AWG territory there)
[14:09] McGroarty Playfair: Saijanai: There are just a couple extra messages. You should be able to isolate those easily.
[14:09] IntLibber Brautigan: a single pose tends to consume about three emails’ worth of data
[14:10] Saijanai Kuhn: KK. You know we’re going http for almost everything. Seems that http might be more reliable but not sure if it’s suitable for such a continuous stream of data
[14:05] paulie Femto: questions on the camera: who developed it? Did you guys build it yourselves? What’s the resolution and would improving the res add anything?
[14:05] Roger Fullstop: Femto: there are a bunch of camera manufacturers. I currently developed my code to support 2 of them
[14:06] Roger Fullstop: (I got prototypes for)
[14:06] Roger Fullstop: The resolution of the camera is 640×480 (the proto, the commercial ones are supposed to go higher)
[14:07] Saijanai Kuhn: I think you’d get wider initial adoption with a keyboard interface (computer and/or MIDI)
[14:08] Ventrella Linden: any other Qs?
[14:08] Evo Szuyuan: what would it take to implement script functions for the puppeteering?
[14:09] Roger Fullstop: Haven’t thought about that at all
[14:09] Ventrella Linden: That was one of our original ideas – and I was never able to implement that – but
[14:09] Ventrella Linden: it would be really empowering.
[14:09] Roger Fullstop: I’m puzzled. Example of what you have in mind, Ventrella?
[14:10] Ventrella Linden: So for instance,
[14:10] Ventrella Linden: instead of the mouse cursor moving joints in Physical Avatar,
[14:10] Ventrella Linden: we design scripts that allow LSL to move them-
[14:10] Ventrella Linden: using various high-level commands.
[14:11] Feynt Mistral: I honestly don’t see what the issue would be if you treated the end effectors like attachments, allowing you to set their positions.
[14:11] Feynt Mistral: llMoveJoint(LEFT_WRIST, llGetPos());
[14:11] Hypatia Callisto salivates over lsl control
[14:11] Ventrella Linden: Physical Avatar code doesn’t care what’s moving its joints.
[14:11] Roger Fullstop: easier than the current animation files… interesting…
[14:11] Evo Szuyuan: and say you want to combine this with motion like walking?
[14:11] Ventrella Linden: Combine with walking…
[14:11] Ventrella Linden: one of the more complex things about this is that…
[14:11] Ventrella Linden: it tried to make traditional avatar animation get along with…
[14:12] Ventrella Linden: physically-based, forward dynamics.
[14:12] Ventrella Linden: Not easy.
[14:12] Ventrella Linden: But do-able.
[14:10] Saijanai Kuhn: the BAP/FAP protocol is pretty compressed and looks to implement the entire system you’ve shown. Wondering if that should be standardized on
[14:12] Saijanai Kuhn: facial animation in MPEG-4
[14:14] Saijanai Kuhn: body animation for MPEG-4
[14:13] Evo Szuyuan: roger.. what kind of usage do you see for your technology?
[14:13] Roger Fullstop: Evo: first, we’d like to make meetings in SL more natural, with gesture and body language becoming part of it
[14:14] Rober1236 Jua: well culture produces gestures
[14:14] Roger Fullstop: that would improve the attractiveness and efficiency of meetings in world
[14:14] Rober1236 Jua: gestures that are automatic that we don’t have to think about
[14:14] Ventrella Linden: I’m not looking at the audience, for example.
[14:14] Rober1236 Jua: you would want to animate these gestures not have to run them
[14:14] Ventrella Linden: Not enough body language to make it worth the low framerate.
[14:15] Roger Fullstop: exactly, right now, a meeting is IM with a nice background
[14:15] Rober1236 Jua: and VOIP
[14:15] Rober1236 Jua: and file exchanges and email and other things
[14:15] Evo Szuyuan: true
[14:15] Roger Fullstop: we don’t really take advantage of the 3D aspect of the world, i.e., relating to others in a 3D space
[14:15] Ventrella Linden: Roger…
[14:16] Ventrella Linden: does the puppeteering work correctly for you – in “local mode”?
[14:16] Roger Fullstop: err… after some debugging, yes 🙂
[14:16] Ventrella Linden: I recall the server interpretation causing joints to be way off.
[14:17] Roger Fullstop: at the beginning, I crashed after one puppeteering session 🙂
[14:17] Ventrella Linden: I’m hoping that it is still looking right in local mode.
[14:17] Roger Fullstop: Ventrella: the issue of joints being way off happened also in local mode
[14:17] Ventrella Linden: curious why.
[14:18] Roger Fullstop: that had to do with the init (getting the copy of the joint position) from LLVOAvatar not being done at the right time
[14:18] Roger Fullstop: under some conditions
[14:18] Ventrella Linden: yea – that sounds right.
[14:18] Ventrella Linden: LLVOAvatar – ah – memories.
[14:18] Roger Fullstop: Yeah, I had to track the whole flag state logic and rewrite it from scratch basically
[14:19] Roger Fullstop: now it works 🙂
[14:19] Ventrella Linden: you rule.
[14:19] Roger Fullstop: I also added rotational puppeteering
[14:19] Roger Fullstop: i.e.: you can’t turn the gaze of the head right now
[14:19] Roger Fullstop: or rotate the hands
[14:19] Ventrella Linden: Good job.
[14:20] Roger Fullstop: so I added an extra modifier to do just that
[14:20] Ventrella Linden: Any other questions from the gallery?
[14:20] Evo Szuyuan: there was a question earlier about avatars other than humanoids
[14:20] Ventrella Linden: Ah…
[14:20] Ventrella Linden: A fave topic.
[14:20] paulie Femto: 🙂
[14:20] Ventrella Linden: uh, what’s the question?
[14:21] Evo Szuyuan: could the humanoid limit be bypassed?
[14:21] paulie Femto: imagine puppeteering a 4-legged avatar. 🙂
[14:21] Ventrella Linden: With enough coding, of course 🙂
[14:21] Humming Pera: or leave the coding more open
[14:21] Ventrella Linden: If this were my own virtual world…
[14:21] Ventrella Linden: we’d all be open-ended animals to begin with.
[14:21] Evo Szuyuan: other joints need to be known
[14:22] Ventrella Linden: Yes – here’s the thing…
[14:22] Humming Pera: open-ended would be much better
[14:22] Ventrella Linden: When you have a known skeleton, it’s much simpler.
[14:22] Ventrella Linden: when you want an open-ended skeleton, you need more procedural umph – like Spore.
[14:22] Saijanai Kuhn: MPEG-4 defines a generic humanoid, and skips to generic bone structure. Nothing in between
[14:23] Saijanai Kuhn: seems to me that optimized quadrupeds and so on could be devised
[14:23] Ventrella Linden: Absolutely.
[14:23] Humming Pera: even speaking in these terms like skeleton implies biological life, which this isn’t …
[14:23] Saijanai Kuhn: most furries are humanoid or four-legged as far as I know
[14:23] Ventrella Linden: I developed an animal skeleton while at LL, but…
[14:23] Malburns Writer: a fish avie?
[14:23] Ventrella Linden: Like so many innovations like that,
[14:23] Ventrella Linden: they took a back seat to stability.
[14:23] Osprey Therian: 😦
[14:23] paulie Femto: 😦
[14:23] Evo Szuyuan: ahh.. so we can start nagging for that code too 🙂
[14:23] Darien Caldwell: lol
[14:24] Saijanai Kuhn: that was the result of the Open Letter calling for freezing innovation until all the bugs were worked out
[14:24] Ventrella Linden: I can’t speak to those decisions – not my call.
[14:25] Hypatia Callisto: I remember Runitai having worked on sculpties being able to adopt the skeleton of the avatar as well
[14:25] Hypatia Callisto: haven’t heard anything about that recently
[14:21] Political Magic: why is 3d video better than having sensors on your body?
[14:22] Roger Fullstop: political: few people like to put on markers…
[14:23] Political Magic: Is it true that using markers is much easier and more reliable than 3d video?
[14:24] Saijanai Kuhn: Political, it would have to be, but not many people want to wear little metallic dots on their face and clothing
[14:24] Roger Fullstop: political magic: it’s more reliable and precise so that’s what movie studios do, but it’s not easier
[14:25] Roger Fullstop: especially if you’re the one who needs to put the markers on 🙂
[14:25] Malburns Writer: i always imagined multiple sensors (4-8) in a rl environment and wearing a small badge to calibrate self
[14:25] McGroarty Playfair: I’ve done mocap. For full body, you spend more time maintaining the markers and recalibrating than you spend shooting.
[14:25] Roger Fullstop: personally, I can’t imagine putting markers on my face to go to work 🙂
[14:25] Ventrella Linden: Markers on glasses – a little easier.
[14:26] Roger Fullstop: yeah but that’s just a couple of points
[14:26] Osprey Therian: Need implants 😀
[14:26] Saijanai Kuhn: Johnny Lee showed using Wii input converted to goggles on youtube. Very cool. I invited him to this presentation but I doubt he got the email
[14:26] Ventrella Linden: Yea – so real face MOCAP should be done optically.
[14:26] Ventrella Linden: with cameras.
[14:26] Roger Fullstop: I loved Johnny Lee Wii videos!
[14:27] Saijanai Kuhn:
[14:27] Roger Fullstop: The first demo I did was a copy of his “point of view tracking” demo but without the Wii
[14:27] McGroarty Playfair: Oooh nice
[14:27] Political Magic: How adaptable is the Wii technology to SL?
[14:28] Political Magic: And is this something to focus on?
[14:28] Roger Fullstop: I heard of some folks trying to adapt the wiimote
[14:28] Roger Fullstop: after all, there’s a joystick driver for it
[14:28] Bingo Onomatopoeia: I did that…
[14:28] Bingo Onomatopoeia: …moving with the nunchuk is nice
[14:28] Saijanai Kuhn: we need a generic plugin architecture for avatar control
[14:29] Saijanai Kuhn: needs to allow for mouse-like AND full puppeteering control
[14:29] Ventrella Linden: multiple inputs.
[14:29] Saijanai Kuhn: Ventrella, are those packets separate or part of some existing message packet?
[14:30] Ventrella Linden: what packets?
[14:30] Saijanai Kuhn: I put in the suggestion to map fundamental animation primitives to hotkeys or chorded keys or even mdid inpu
[14:30] Ventrella Linden: (packet question first..)
[14:30] Saijanai Kuhn: MIDI*
[14:30] Man Michinaga: yes.
[14:30] Man Michinaga: MIDI INPUT!
[14:31] j3rry Paine: yayyyy sai
[14:31] Man Michinaga: That’s what SL NEEDS!
[14:31] paulie Femto: imagine playing a keyboard or instrument to make yer av dance. πŸ™‚
[14:31] Ventrella Linden: sorry – there was another question…?
[14:31] Saijanai Kuhn: of course we need standardized MIDI instruments but that doesn’t exist for Linux afaik
[14:31] Tara Yeats: MIDI input would be very cool for puppeteering
[14:31] Oz Larg: Animusic πŸ˜€
[14:31] Osprey Therian: singing or humming hah
[14:32] Humming Pera: :-))
[14:32] Saijanai Kuhn: Ventrella, the packets that are sent for the puppeteering on the server side
[14:32] Ventrella Linden: sorry – what about the packets?
[14:33] Saijanai Kuhn: ok, let’s say you do a control of the avatar’s joints. Is that cached and sent as a big chunk of BVH, or as a stream of separate modifications, or what?
[14:33] Ventrella Linden: Ah…
[14:34] Ventrella Linden: Well, before Roger has his take on this one…
[14:34] Saijanai Kuhn: OK, thought canned animations were cached on client side. Didn’t realize they were streamed already.
[14:34] Ventrella Linden: The goal is to not have to send too much stuff over whenever you change a
[14:34] Ventrella Linden: So, it’s NOT a BVH.
[14:34] Ventrella Linden: It was originally just the one joint…
[14:34] Ventrella Linden: and each viewer knew how to run the physics (well, kind of).
[14:35] Ventrella Linden: Then it was changed to all the joints sent over…
[14:35] Ventrella Linden: But the details were known by Cube Linden well, and now,
[14:35] Ventrella Linden: I think Roger knows that parts best. Roger?
[14:35] Roger Fullstop: Actually, I haven’t played with the roundtripping through server yet
[14:36] Roger Fullstop: I’ve seen and read the code though
[14:36] Roger Fullstop: and, indeed, it’s still the way you just described
[14:36] Evo Szuyuan: ah
[14:36] Saijanai Kuhn: OK, I’m the protocol documentation guy for the AWG so I get into this stuff. Sorry if I’m boring anyone
[14:36] Roger Fullstop: I’ll be diving into the protocol part later this month
[14:37] Roger Fullstop: for the moment, I’m finishing the local tracking part
[14:37] Saijanai Kuhn: Should give me a holler if you need an assistant
[14:37] Roger Fullstop: Saijanai: cool! ping me on sldev
[14:37] Saijanai Kuhn: the eventual goal is to document all current protocols as a baseline for redesigning them for the open grid
[14:39] Saijanai Kuhn: was looking more into the Body Animation Parameters (BAP) protocol. They have a pretty extensible system. Not sure if it’s what is required for SL or not
[14:40] Saijanai Kuhn: shameless plug. If any technogeeks want to help design the SL 2.0 protocols, IM me for AW Groupies membership
[14:40] Saijanai Kuhn:
[14:41] Saijanai Kuhn: don’t tell Zero I said SL “2.0” though
[14:41] Roger Fullstop thinks he should join that group
[14:41] Evo Szuyuan: the thing i wonder is how to proceed with puppeteering if we can’t look at server side of things..
[14:41] Evo Szuyuan: as long as it all works no problem of course..
[14:41] Saijanai Kuhn: Evo, as long as you know what the protocols are supposed to do, it shouldn’t matter TOO much
[14:41] Ventrella Linden: One idea is this…
[14:41] Ventrella Linden: if it’s just viewer-side,
[14:41] McGroarty Playfair: Evo: You could pass stuff encoded in chat as an interim fix.
[14:41] Rober1236 Jua: Yah work for Microsoft for a while you get good at it
[14:42] Evo Szuyuan: hehehehe
[14:42] Rober1236 Jua: Im serious
[14:42] Ventrella Linden: it can be used to save poses – to make animations. That’s what Aura was (is?) working on.
[14:42] Evo Szuyuan: aha!
[14:42] Hypatia Callisto: that’s fantastic, and pretty much like what we could do way back when in a little place I was once in (rose)
[14:42] Roger Fullstop: That code is still in there though I haven’t tried it yet
[14:43] Evo Szuyuan: that’s the internal poser right?
[14:43] Roger Fullstop: yeap
[14:43] Evo Szuyuan: thanks saij
[14:44] Man Michinaga: what would be really cool is to map sensor input for puppeteering of av IK nodes or object control
[14:44] Roger Fullstop: the thing with puppeteering is that you need to track much more than a couple of points for realistic movements
[14:44] Political Magic: What is the difference between puppeteering and just animated bots in SL?
[14:44] Malburns Writer: animation=effects puppeteering=function ???
[14:44] Evo Szuyuan: animation is pre recorded
[14:45] Ventrella Linden: Yes – what Evo sez.
[14:45] Political Magic: OK, it’s real time, but that is what an avatar does….how is puppeteering
[14:45] Ventrella Linden: As Evo was saying…
[14:45] Ventrella Linden: Normal animation is pre-recorded.
[14:46] Ventrella Linden: Think of it as a movie being played.
[14:46] Ventrella Linden: Then you stop the movie and
[14:46] Ventrella Linden: grab control of the skeleton – that’s puppeteering.
[14:46] Ventrella Linden: When you’re done, it falls back into normal mode.
[14:46] Ventrella Linden: That’s an oversimplification,
[14:47] Ventrella Linden: but gets the basic idea, I think.
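Ventrella’s "stop the movie and grab the skeleton" picture can be sketched in a few lines: each joint plays its pre-recorded frame until a live puppeteered value overrides it, and falls back to the recording when released. This is only an illustration of the concept, not Linden Lab’s implementation.

```python
# Minimal sketch of puppeteering vs. canned animation: a live override
# per joint wins over the pre-recorded frame, and releasing it falls
# back to the "movie". Joint names and values are invented examples.

class Joint:
    def __init__(self, name: str):
        self.name = name
        self.override = None  # live puppeteered value, if the user grabbed this joint

    def pose(self, canned_frame: float) -> float:
        """Return the live override if present, else the canned animation frame."""
        return self.override if self.override is not None else canned_frame

elbow = Joint("elbow")
print(elbow.pose(30.0))  # no override -> the canned animation plays: 30.0
elbow.override = 90.0    # user "grabs control of the skeleton"
print(elbow.pose(30.0))  # puppeteering wins: 90.0
elbow.override = None    # user lets go
print(elbow.pose(30.0))  # falls back into normal mode: 30.0
```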
[14:48] Ventrella Linden: Well,
[14:48] Ventrella Linden: RL is calling me.
[14:48] Osprey Therian: Thank you very much, Ventrella.
[14:48] McGroarty Playfair: Best luck on current endeavors!
[14:49] Rober1236 Jua: Wondering, how we use our body is a function of the social context we are in, does it not make sense to just create a vocabulary of understood gestures for the context of SL rather than “puppeting” from RL?
[14:49] Malburns Writer: Yes – thanks all – highly interesting
[14:49] Evo Szuyuan: Yes JJ and Philippe !
[14:49] Evo Szuyuan: I’ll post the chat transcript on the dorkbot blog
[14:49] spinster Voom: oh good evo, think i need to reread it at least a couple of times lol
[14:49] Saijanai Kuhn: Tree Kyomoon has a nice wikifier for chat logs. Adds SL avatar pages and so on
[14:49] Rober1236 Jua: thank you
[14:49] Hypatia Callisto: thank you very much, this was a great presentation
[14:50] Evo Szuyuan: many many thanks for sharing this with us and being here
[14:50] Ventrella Linden: Roger – great work –
[14:50] Roger Fullstop: that was a pleasure to meet with you
[14:50] Ventrella Linden: glad you’re giving the code some TLC.
[14:50] Tara Yeats: excellent session!
[14:50] Ventrella Linden: ok – later all…
[14:51] Roger Fullstop: I’m going to stay around for another 10 minutes then sign off
[14:51] Osprey Therian: for upcoming event info where should we go Evo?
[14:51] Evo Szuyuan: i’d love to try the camera!
[14:51] Evo Szuyuan: please join dorkbot group
[14:51] Maxxo Klaar: thank you all – bye
[14:51] Osprey Therian: is it open to join?
[14:52] Evo Szuyuan: yes..
[14:52] Evo Szuyuan: dorkbot SL electronic art meetings
[14:52] Evo Szuyuan: Fau..
[14:52] Evo Szuyuan: what would you do if you had the 3d cam ?
[14:52] Fau Ferdinand: ah
[14:53] Fau Ferdinand: I’d pretend I’m looking for my head on the floor
[14:53] Bingo Onomatopoeia: I would build a 3d sound-editor or synth
[14:53] Evo Szuyuan: πŸ˜€
[14:53] Roger Fullstop: there’s more to 3D cams than just puppeteering
[14:54] Evo Szuyuan: yes of course!
[14:54] Evo Szuyuan: i would love to see face animation too
[14:54] Roger Fullstop: face animation needs to be done with mesh blending
[14:55] Roger Fullstop: in SL
[14:55] Roger Fullstop: I’d love to have someone working on this with me
[14:55] Evo Szuyuan: are you doing everything by yourself now?
[14:55] Roger Fullstop: yeap, just me and Mitch… but he doesn’t code much… πŸ˜€
[14:55] Evo Szuyuan: haha.. no wonder you’re lead programmer πŸ™‚
[14:55] Evo Szuyuan: just kidding
[14:55] Saijanai Kuhn: Talk to Ina Centaur. She directs the SL Shakespearean company and I know they’ve been doing work with lipsynched machinima. Might be some programmers who could contribute
[14:55] Roger Fullstop: a leader of a team of *1* πŸ˜€
[14:55] Evo Szuyuan: I was hoping Mm Alder would be here
[14:55] Evo Szuyuan: he implemented lipsync for SL
[14:55] Saijanai Kuhn: this is exactly what they’ve been hoping for, body and facial animation
[14:56] Evo Szuyuan: Ina is using crazy talk
[14:56] Saijanai Kuhn: ah, OK
[14:56] Roger Fullstop: I heard about that lipsync patch but haven’t had time to play with it
[14:56] Saijanai Kuhn: but I know they’ve talked about puppeteering hoping it would be available
[14:56] Evo Szuyuan: there has been a lot of talk on bringing more expression to avatars on the machinima group
[14:57] Oz Larg: Back in the way old days we had lip synchronization in the Traveler platform
[14:57] Saijanai Kuhn got a hippo nomination for typing too much
[14:57] Roger Fullstop: Those guys should love puppeteering + 3D cam πŸ™‚
[14:57] Evo Szuyuan: i found a link to discussion notes from 2006 on puppeteering
[14:57] Evo Szuyuan: they thought it would make SL nr 1 for machinima
[14:57] Saijanai Kuhn: well, I’m sure it would
[14:58] Evo Szuyuan: i think a lot of the concern was making the UI more complicated
[14:58] Roger Fullstop: well, the cam should make the UI much more simple…
[14:58] Evo Szuyuan: absolutely!
[14:59] Roger Fullstop: that’s the goal at least
[14:59] Malburns Writer: i’m using lipsynch and love it – but needs wider range of gestures still
[14:59] Roger Fullstop: tantalizing…
[14:59] Oz Larg: Malburns: is that trans?
[14:59] Saijanai Kuhn: I heard that it only triggers on volume right now, not phonemes
[14:59] Malburns Writer: trans?
[15:00] Oz Larg: phonemes were used in the Traveler platform
[15:00] Oz Larg: built into the vocodec
[15:00] Saijanai Kuhn: when sound comes over the microphone, the louder the sound, the bigger the mouth. Phonemes would use AI to shape the mouth differently for different sounds
[14:58] Evo Szuyuan: true
[14:59] Evo Szuyuan: right now you can only get information on the volume
[14:59] Evo Szuyuan: and to do phoneme detection you need an audio stream per avatar
[15:00] Evo Szuyuan: Vivox will come with new function for developers that hopefully will enable this
[14:59] Saijanai Kuhn wonders about non-english phonemes though
[15:01] Oz Larg: A lot of this phonemes stuff was done in the old days, I can hook you up to the people who designed it
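The volume-driven lipsync described in this exchange ("the louder the sound, the bigger the mouth") amounts to a simple mapping from microphone loudness to a mouth-open amount. A sketch, with a made-up noise floor and scaling factor purely for illustration:

```python
# Illustrative volume-based lipsync: map the loudness (RMS) of an audio
# buffer to a mouth-open amount in [0, 1]. Threshold and scale are invented.
import math

def mouth_open(samples, floor=0.02):
    """Map audio samples in [-1, 1] to a mouth opening between 0 and 1."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms < floor:            # below the noise floor: keep the mouth closed
        return 0.0
    return min(1.0, rms * 4)   # louder sound -> bigger mouth, clamped at 1

print(mouth_open([0.0] * 100))       # silence -> 0.0
print(mouth_open([0.5, -0.5] * 50))  # loud signal -> 1.0 (fully open)
```

Phoneme-based lipsync, as Oz and Saijanai note, would instead classify *which* sound is being made and pick a different mouth shape per phoneme, which needs a per-avatar audio stream rather than just a loudness value.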
[15:01] Saijanai Kuhn: trying to see if Linux users have MIDI playback in SL via QT movies. If they do, then you have a universal MIDI instruments format available. Could use it to send long streams of animation synched with music
[15:02] Saijanai Kuhn is thinking an official VAG for puppeteering would be a good thing. Give it an official page, tie-in with AWG/AW Groupies, etc
[15:02] Saijanai Kuhn:
[14:59] Rober1236 Jua: Animations are like words in the body language, animations will always be needed like characters
[15:00] Rober1236 Jua: you might puppet it but you will want to send certain animations rather than having to act out long complex gestures
[15:00] Evo Szuyuan: maybe robert..
[15:00] Evo Szuyuan: but for example..
[15:01] Rober1236 Jua: Gesture only matters for meaning
[15:01] Evo Szuyuan: we’re working on a movie right now..
[15:01] Evo Szuyuan: and there are no subtle anims in SL
[15:01] Evo Szuyuan: there is riding a horse
[15:01] Rober1236 Jua: well SL anims are primitive that is true
[15:01] Evo Szuyuan: but not ‘going up the horse’
[15:02] Evo Szuyuan: also synchronization between avi’s is difficult
[15:02] Rober1236 Jua: the thing about gestures is they communicate
[15:02] Rober1236 Jua: so you can keyboard or type them
[15:02] Rober1236 Jua: you don’t scratch your head to scratch your head but to show you are wondering
[15:02] Rober1236 Jua: gestures that have no intention are like picking your nose
[15:02] Evo Szuyuan: if you have joint information you can have more fluid interaction between avatars
[15:02] Rober1236 Jua: that is true
[15:02] Rober1236 Jua: but the communication is selecting gestures
[15:03] Rober1236 Jua: that have meaning you wish to communicate
[15:03] Rober1236 Jua: a keyboard is very interesting
[15:03] Rober1236 Jua: or maybe some kind of plastic substance like a clay
[15:03] Roger Fullstop: Robert: I’m using gesture to communicate in RL but not in SL ’cause babysitting the avatar for it is too much work
[15:03] Rober1236 Jua: Well the goal of realism is that gesture in RL will be intentional
[15:04] Rober1236 Jua: so though puppeting might accomplish it the goal is communication not motion for motion’s sake
[15:04] Rober1236 Jua: in fact I would like to move in ways I could not puppet
[15:04] Roger Fullstop: ? gestures in RL are intentional
[15:04] Rober1236 Jua: well some like picking our nose or rear are not
[15:05] Rober1236 Jua: but for the most part how we use our bodies in public is learned and has social intention
[15:05] Rober1236 Jua: gestures like scratching or petting things are often considered rude
[15:05] Roger Fullstop: what about projecting that social intention with no or little UI?
[15:05] Rober1236 Jua: well that is not very much fun
[15:06] Rober1236 Jua: I can just text that
[15:06] Evo Szuyuan picks her nose
[15:06] Rober1236 Jua looks bored with a world without gesture
[15:06] Rober1236 Jua snickers at someone picking their nose
[15:06] Rober1236 Jua: but you just made a metagesture
[15:06] Roger Fullstop: right now, I’m not looking at anyone in SL, just the stream of text in the IM window
[15:06] Rober1236 Jua: you made a gesture to reference a gesture which means its a message
[15:06] Roger Fullstop: all the comm is there
[15:07] Rober1236 Jua: Yes but it is dull
[15:07] Roger Fullstop: what’s the plus of being in SL if everything is entered and rendered as text?
[15:07] Rober1236 Jua: it lacks the shades and colours that a full body can bring to communication
[15:07] Evo Szuyuan: i think you can have both
[15:07] Rober1236 Jua: We are all in agreement on this, I just want to make clear that most linguists view gesture as communication
[15:07] Roger Fullstop: I think we need both
[15:07] Evo Szuyuan: i wouldn’t want everything to be just like RL here
[15:08] Rober1236 Jua: communication and not fidelity of body movement must be the objective
[15:08] Roger Fullstop: gesture is comm
[15:08] Evo Szuyuan: but i certainly want more freedom of expression
[15:08] Rober1236 Jua: Same here, I want to do backflips when I am happy
[15:08] Roger Fullstop: that’s why you want it in SL, with little or no extra “texting” to transfer it
[15:08] Rober1236 Jua: I want to jump over the moon with joy
[15:08] Rober1236 Jua: but what coding, what words, and how is it expressed?
[15:09] Evo Szuyuan: but when someone is talking i also want to know they are talking..
[15:09] Rober1236 Jua: Yes and how they use their arms and feet
[15:09] Evo Szuyuan: and right now that’s very difficult
[15:09] Rober1236 Jua: and I want to also fill my hair with wind when I am being important
[15:09] Roger Fullstop: indeed, all that communicate
[15:09] Rober1236 Jua: right now we don’t know how to do it
[15:09] Rober1236 Jua: or maybe Google does
[15:09] Malburns Writer: maybe voice recognition software could someday help animate avatars
[15:09] Rober1236 Jua: but AI has not come far in Semantics
[15:10] Rober1236 Jua: well I would point to the sociologist Mauss and say we learn to use our body to
[15:10] Evo Szuyuan: i think a lot of communication is in subtle body language
[15:10] Rober1236 Jua: in IT we give people tools and they learn to use it
[15:10] Saijanai Kuhn: human readable languages are difficult to program in
[15:10] Rober1236 Jua: well I think probably the richer tool concept is better
[15:10] Evo Szuyuan: not just in language
[15:10] Rober1236 Jua: let people have more control over av
[15:11] Saijanai Kuhn: the rules work just fine for reading, but when trying to write NEW stuff in a human-like language, it gets kinda silly (like AppleScript)
[15:11] Roger Fullstop: sure, that’s the idea
[15:11] Roger Fullstop: the cam is not all or nothing
[15:11] Evo Szuyuan: exactly
[15:11] Roger Fullstop: it’s an extra input device
[15:11] Evo Szuyuan: it does not exclude anything else
[15:11] Roger Fullstop: why should we be limited to keyboard and mice?
[15:12] Evo Szuyuan: to get RSI ?
[15:12] Rober1236 Jua: well for accessibility we should make almost everything ultimately keyboard and mice
[15:12] Rober1236 Jua: actually keyboard
[15:12] Rober1236 Jua: but we can add other devices that make it richer
[15:12] Evo Szuyuan: like 3d cam ?
[15:12] Evo Szuyuan: πŸ˜€
[15:12] Roger Fullstop: some people can’t use keyboards…
[15:13] Rober1236 Jua: 4-D cam, with ability to go back sounds great
[15:13] Evo Szuyuan:
[15:13] Evo Szuyuan: these were the 2006 notes on puppeteering..
[15:13] Evo Szuyuan: for those interested
[15:13] Rober1236 Jua: excellent
[15:13] Roger Fullstop checks that
[15:14] Evo Szuyuan: if i could puppeteer other avatars i would put bingo straight πŸ˜‰
[15:14] Evo Szuyuan: he’s been hanging there forever!
[15:14] Saijanai Kuhn: BTW, you probably missed it in the spam, but someone developed a text-to-American-sign web app
[15:15] Saijanai Kuhn:
[15:15] Bingo Onomatopoeia: ehheh
[15:15] Bingo Onomatopoeia: funny, didn’t notice that πŸ™‚
[15:15] Saijanai Kuhn: doesn’t work on my Mac, but the interface looks cool
[15:15] Evo Szuyuan: interesting
[15:16] Saijanai Kuhn: seems like you could have primitive AI looking for key words or phrases and do custom gestures instead of the ASL
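Saijanai’s last idea, scanning chat text for key words and firing a custom gesture instead of full sign language, reduces to a simple lookup. A sketch; the trigger words and gesture names are invented examples:

```python
# Illustrative keyword-triggered gestures: scan a chat line for known
# trigger words and return the gestures to play. Mapping is made up.

GESTURES = {            # hypothetical trigger-word -> gesture mapping
    "hello": "wave",
    "yes": "nod",
    "no": "shake_head",
    "lol": "laugh",
}

def gestures_for(chat_line: str):
    """Return the list of gestures triggered by key words in a chat line."""
    words = (w.strip("!?.,") for w in chat_line.lower().split())
    return [GESTURES[w] for w in words if w in GESTURES]

print(gestures_for("Hello everyone"))  # ['wave']
print(gestures_for("lol no way"))      # ['laugh', 'shake_head']
```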
[15:16] Roger Fullstop: ok guys, I think RL (in the shape of my 8 year old daughter) is calling me
[15:16] Evo Szuyuan: ok..



July 7, 2008 at 9:23 pm 3 comments

Dorkbot Session Announcement

The upcoming Dorkbot session will take place 6 July 1:00 pm PDT / 22:00 CET at the Odyssey Simulator!

This dorkbot session will be of interest to developers, performers, machinimators, animators, photographers, and just about anybody with an interest in bringing more expression to avatars and more direct ways to interface with our embodied selves.

JJ Ventrella (a.k.a. Ventrella Linden) will present the avatar puppeteering project, a project motivated by the aim of bringing more expression to the avatar by enabling a more fluid and direct way to manipulate the ‘physical avatar’. This is a Linden Lab project that was unfortunately ‘put to sleep’ when Linden Lab decided to focus on the stability of Second Life and internal opinions differed over how compelling the feature really is.
Last month however, Linden Lab released the client source code of this project, to allow other people to build on it (and convince them otherwise).
Philippe Bossut will present HandsFree3D, a project that enables interfacing with virtual worlds via 3D camera motion tracking. Philippe Bossut has developed several demos to demonstrate how one can navigate and execute all sorts of tasks without the need for a mouse and keyboard, and over the last month he has been working on connecting his code with the puppeteering feature.

IM Evo Szuyuan for questions (or technical issues you might experience during the event).

JJ Ventrella

JJ Ventrella is a programmer-artist who specializes in Virtual Worlds and Artificial Life. After three college degrees (Virginia Commonwealth University, Syracuse University, and the MIT Media Lab), he moved to San Francisco to do simulation-based game design at Rocket Science Games. He then joined Will Harvey in founding, where he was Principal Inventor for 5 years and invented many of the avatar communication technologies. He also created the There Dog, vehicle physics, and many other technologies and designs. After a short stint at Adobe developing JavaScripts for Acrobat3D, Ventrella spent two years at Linden Lab, where he invented FollowCam, Flexies, and Puppeteering. In his own time, Ventrella has been rolling out versions of his artificial life game GenePool, and publishing papers about interactive animation and artificial life. Currently Ventrella is putting the final touches on the NASA Images home page at the Internet Archive. Soon he will head up to Vancouver to teach a Virtual Worlds class at the Masters of Digital Media, and to start writing his first book. You can see many of Ventrella’s creations at

Philippe Bossut

Philippe Bossut (Roger Fullstop in Second Life) is a software engineer who has spent his entire career in Computer Graphics and Desktop applications development. He currently works as an “Entrepreneur in Residence” at Kapor Enterprises Inc (KEI) where he leads the Segalen project. Prior to Segalen, he worked 3 years as an engineering manager at OSAF (Open Source Application Foundation) and, before that, held various management positions at Macromedia and Microsoft. Back in the 90’s, he was one of the early developers of Live Picture, an award-winning image compositing application for the Mac that helped popularize resolution-independent non-destructive image editing. That entrepreneurial adventure took him away from his native France and brought him to California 14 years ago (though, considering his heavily accented English, you could swear he just landed…). He holds a PhD in Computer Graphics from Ecole des Mines de Paris and an engineering degree in Geology from Ecole des Mines de Saint Etienne. He also has a long-standing passion for Archaeology and has published some papers on the subject. His personal blog can be read at :

Segalen / HandsFree3D
Segalen is an attempt to bring dramatic user experience improvements to Virtual Worlds in general and Second Life in particular. The initial momentum was given by the emergence of “3D” cameras from a handful of vendors, based on a variety of technologies. Though none of those cameras were yet available to the public, Mitch Kapor decided to fund a small experimental project with Philippe Bossut to explore their capabilities. The project started in January 2008 with some camera prototypes and SDKs. In May, Segalen posted its first video demo, rapidly followed by a second one. The first one showed that the camera and feature extraction were fast enough to completely replace the use of the keyboard for in-world navigation. The second focused on precision tracking, showing that precise click and drag, as required in object editing for instance, was achievable. Those videos received very wide coverage in the blogosphere and the traditional press. After this initial success, the project zeroed in on the true objective: combining digital puppeteering with real-time motion capture. Most of the development since May has been to debug and rearchitect the “puppeteering branch” of the open source slviewer. This work is now complete (see JIRA VWR 7703) and the project is now refocused on plugging the camera feature tracking input into the viewer.

Philippe Bossut (Roger Fullstop), San Francisco (Second Life)

July 3, 2008 at 2:38 pm 4 comments

Dorkbot Session Announcement

The upcoming Dorkbot session is a collaboration between Dorkbot SL and Dorkbot Paris, and will take shape as a Mixed Reality Event, in Paris and Second Life.
The Dorkbot Session will take place 21 March 12:30 pm PDT / 20:30 CET and start in Second Life at the Odyssey Simulator!
In Paris it will be at La Cantine Numerique.
151 rue Montmartre, dans le Passage des Panoramas, 75002 Paris
MΓ©tro Grands Boulevards

During the session we’ll be teleporting to other locations so please IM Evo Szuyuan if you don’t find anyone there.

From Paris Maxime Marion will be presenting the records project, and Nicolas Barrial from extralab will present La Cantine Virtuelle (La Cantine in Second Life). From Second Life Berardo Carboni will present the Volavola project, and Second Front will perform Minotaure.

Live Video streaming service donated by (Second Life avatar Eifachfilm Vacirca)

Maxime Marion – Records

Digital artist and musician Maxime Marion will present his musicometer project, which generates music from algorithms.

“Records” is an online project which is an electronic music net-label. The names of the bands, the albums and the songs are automatically generated by computer software.


Second Front, Minotaure Haledol Dreams

Second Front is the pioneering performance art group in the online avatar-based VR world, Second Life. Founded in 2006, Second Front quickly grew to its current 10-member troupe that includes: Wirxli Flimflam aka Jeremy Owen Turner, Tea Chenille aka Tanya Skuce, Man Michinaga aka Patrick Lichty, Alise Iborg aka Penny Leong Browne, Tran Spire aka Doug Jarvis, Great Escape aka Scott, Lizsolo Mathilde aka Liz Pickard, Gazira Babeli aka CLASSIFIED, Bibbe Oh aka Bibbe Hansen and Fau Ferdinand aka Yael Gilks.
Taking their influences from numerous sources, including Dada, Fluxus, Futurist Sintesi, the Situationist International and contemporary performance artists like Laurie Anderson and Guillermo Gomez-Pena, Second Front creates theatres of the absurd that challenge notions of virtual embodiment, online performance and the formation of virtual narrative. They have already performed extensively, including in Vancouver, Chicago and New York, and have been featured in publications including Slate, Eikon, Realtime Arts (Australia), The Avastar (published by Axel-Springer, Germany) and Exibart (Italy).

Second Front will perform the piece “Minotaure Haledol Dreams”, a bull fight in a padded room, a mental hospital scene.


Berardo Carboni – Fly me / Volavola

Berardo Carboni (1975) holds a degree in law; he is a screenwriter and director, and he has made several shorts and documentary films that have been selected for national and international festivals.
He has worked with Lara Favaretto on the making of videos for the art circuit; they have been awarded prizes in important venues (Furla Prize, Venice, 2001; P.S. 1 studio program, MoMA, New York, 2002). Together with Lara Favaretto he also worked in 2003 on his first full-length film, Buco nell’acqua, a TV-movie with Sandra Milo, produced by Mediatrade.
Currently he is working on the project “Fly me / Volavola”.

Fly Me / Volavola – The Movie
Volavola is about human beings, emotions inevitably becoming dull, the desire not to surrender to habit, always striving for more, and love that needs to be kept alive and preserved from its natural decay. There are two films in Volavola, both of them with the same screenplay. However, the first film will be shot in 3-D computer graphics, also exploring the virtual worlds of Second Life. The plot, as it unravels, specifically aims to disorient the viewer as it becomes increasingly difficult to understand what is actually real and what is not. Real avatars and real people, interpreting themselves, plus fictional characters, both computer generated and in flesh and blood, are all playing different roles throughout the entire picture. Finally, Volavola also deals in a sociological way, sometimes even surreal, with themes such as hacker ethics or the values shared in a virtual society, topics which are too often left out by science fiction in general. The main message is to defend those values, as they play an important part in our cultural heritage by helping us to build a better place to live for everyone. The entire film will be set in Rome, as the actual city and as a virtual one where it is now possible to meet characters coming from different virtual universes. The two films will be mirroring each other, although the pictures will be distributed through different media.

Berardo Carboni, who previously shot Shooting Silvio, wrote the script in collaboration with the writer Mario Gerosa, author of the book Virtual Worlds and Second Life, currently in its third reprint, editor-in-chief of “AD, Architectural Digest Italia” magazine and professor of cultural planning of the territory and communication at the “Politecnico di Milano”. Berardo Carboni is also working on the direction of both parts of the production.

La Cantine

Nicolas Barrial – Basics of β€œbuilding in Second Life”

Nicolas Barrial (Nick Rhodes in SL) built the virtual representation of La Cantine in Second Life.

La Cantine is a creative, experimental and innovative space dedicated to digital technologies, located in the center of Paris.

Both a place of exchange and a technological showcase, La Cantine is open to professionals, tech enthusiasts and everyday users. The place is dedicated to bottom-up innovation and welcomes all contributors to digital life. La Cantine favors the conception and emergence of new uses, products and services by accelerating ideas, creation and networked innovation.

A true collaborative platform, La Cantine is a new space for sharing experience and skills (co-working, art-oriented platforms, alternative venues, competitive clusters, specialized research labs, colleges and universities).

This desire to share and find new ways to collaborate made it natural to extend its activity to virtual worlds. Orange being one of the main partners of La Cantine, Orange Island seemed like the perfect place for the implementation of the virtual Cantine.

At “La Cantine Virtuelle”, you will attend mixed reality events with partners in Paris, but also find content from innovative projects.

March 17, 2008 at 1:09 pm 4 comments

Dorkbot Session Announcement

The 4th Dorkbot Session SL will take place 28 October 2 pm PDT / 22:00 CET and start at the Odyssey Simulator!
We’ll be teleporting to other locations so please IM Evo Szuyuan if you don’t find anyone there.

Featured speakers are Nnoiz Papp (Tobias Becker) and Juria Yoshikawa (Lance Shields), two very versatile artists. In this session we will focus on virtual installation art. Nnoiz Papp will present his latest works, emphasizing the aural aspects of his installations, while Juria Yoshikawa will present an overview of the work she realized over the last year, concentrating on kinetic objects and light sources.

Juria Yoshikawa

Juria Yoshikawa (RL name: Lance Shields)

– Juria’s sl artist blog:
– Also see photos of her work at:
– Visit her art sim at:

After two decades of creative pursuits – ranging from conceptual art, installation, poetry, performance, computer art, animation, photography and digital design – Juria Yoshikawa (a.k.a. Lance Shields, Tokyo-based new media artist and designer) arrived in Second Life in the winter of 2007 looking for a new artistic spark. Rather than bringing in rl artwork, Juria is compelled to use mainly the elements that make up sl itself. A typical Juria Yoshikawa virtual artwork mixes kinetic objects, light sources, animated texture, ambient noise and av animations. She inevitably chooses scales larger than conventional gallery work because she is interested in people experiencing the work in a physical way – flying through them, riding on them and socializing within the art. To Juria virtual art is about freeing oneself up to create in ways she finds impossible in real life.

In my real life, I am Lance Shields, a new media artist, creative director, interactive designer and social media specialist. Graduating with a BFA in new media from San Francisco Art Institute in 1992, I started my rl creative career in sculpture and installation but became progressively more involved in the digital and interactive. I see Second Life as a return to my artistic roots that at the same time combines my newer interests in the phenomenology of the virtual world. Phenomenology is defined as “the reflective study of the essence of consciousness as experienced from the first-person point of view”. In my commercial life, I am a social media strategist helping a global company embrace social networking, blogging and Second Life.

Some of Juria’s live works

Cosmic Kisses Illuminated
To Liquid Light
Crystal Choreography
Adrift in Veiled Tongues
Ugly Contradictions
Garden Electric

Recent Installations by Juria Yoshikawa

“Liquid Light” and “Aurora Room”, Princeton University, on exhibit now
To Liquid Light
To Aurora Room

“Tower of Ugly Contradiction”, Burning Life, on exhibit now

“Garden of Memespelunk”, Personal art lab, in-progress / open to public
Garden of Memespelunk

“Garden Electric”, IBM, permanent exhibit
Garden Electric

“Blink – Two Rooms and an Island”, Velvet Underground, August 2007

“Exhibition 4”, White Cube Gallery, June 2007

“I’m Not Here”, Gaping Lotus Experience, May 2007

“Land of Forgotten Light”, Bluffs Center for the Arts, May 2007

Garden of Memespelunk – creative lab & collected works of Juria Yoshikawa.
Garden of Memespelunk

All are welcome to my newly created art garden located on a remote island. Feel free to come and explore my new and past installations, kinetic light art and sound experiments. This is an on-going project so it will be changing frequently. I recommend viewing with sun on midnight and sound volume on (preferably with headphones.) I love feedback of any kind so please IM me your thoughts. Happy memespelunking!

Nnoiz Papp

Nnoiz Papp (RL name: Tobias Becker)

score music composer / music for animated movies / sound designer / keyboarder / oboist / 3D / video installations

1980-1986 Musikhochschule KΓΆln (studying music for teaching in schools, oboe and (jazz-)piano)

since 1985 over 250 music productions for German children’s TV (Sendung mit der Maus)

since 1982 working as a studio musician on oboe (for example with Klaus Schulze) and keyboards (since 1984 with computers) in many different styles (from pop to heavy metal with U.D.O. and AXXIS)

10 CD productions (4 under the pseudonym “SVENSSON” – electronic & oboe; 4 archive music CDs for Selected Sound, Koch music library and Sonoton)

live music with TRIOGLYZERIN, a trio that plays live to old silent movies in cinemas

since 1996 various internet activities (QuickTime VR – Flash – 3D), see the links below…

2006 live video installation at “Wuppertaler Bühnen”, visualizing Nyman’s opera “The Man Who Mistook His Wife for a Hat” with VJ software

In SL since the end of May 2007, trying to put the things together…

Some of Nnoiz’s live works


Internet: (oboe & electronic), (acoustic & electronic lounge sound) in German… sorry… English version coming soon… (dark ambient)

October 26, 2007 at 6:53 pm 1 comment

self-report: 3rd dorkbot second life session (17 june 2007), chat, pics, recording

Ok, I’m hardly objective, but the 3rd edition of Dorkbot Second Life was again a great success!

There were 3 artists presenting, who all work in both Real Life and Second Life, but in keeping with the Dorkbot SL motto – “People doing strange things with Second Life” – this session was about their SL work.

The session started with Miulew Takahe (Björn Eriksson in RL) and Bingo Onomatopoeia presenting the Avatar Orchestra Metaverse (AOM). Miulew explained the ideas and mechanisms behind AOM, illustrated with pictures and sounds. Bingo then demonstrated the instruments, and emphasized the role of visualization in the work. The lag during his presentation made clear one of the major challenges an avatar orchestra faces, and though unwanted, I consider it an essential part of the presentation.

For the second part of the session the mic was handed to Adam Ramona (Adam Nash). He gave a short intro on his work, after which everybody teleported to his latest installation, A Rose Heard At Dusk. People seemed a bit hesitant to go, because everybody had a lot of questions for Adam and was eager to hear about his ideas. But once assured there was plenty of room for questions during the demo, we all went. This installation is definitely one that would be hard and very expensive to create in Real Life.

The experience was intoxicating and some people stayed in for over an hour; it actually ended up in a party with everybody dancing, chatting, showing off particle systems, and a smokers’ corner 😉

Below the images is a chat transcript of the session. Miulew recorded the sound of Adam’s installation and it can be heard on his podcast. (People unfamiliar with Second Life, please note this is a live recording and therefore includes your typical avatar noise like typing and flying against the ceiling.)
For more info (artist biographies, links etc.) please check out the dorkbot session announcement.

dorkbot 3 intro · Bingo’s instrument · Dorkbot 3 snapshot

Adam Ramona · A Rose Heard At Dusk · party

middle top photo by Spinster Voom
right bottom photo by Evo Szuyuan
all other photos by Maximillian Nakamura

chat transcript 1st part (intro + AOM)
Maximillian Nakamura: Hello everybody, my name is Maximillian Nakamura and I am the founder of dorkbot SL. I would like to first introduce Evo Szuyuan, who is now organizing the dorkbot sessions with me.
Maximillian Nakamura: we are just a few, but let’s begin.
Maximillian Nakamura: Dorkbot refers to a group of affiliated organizations worldwide that hold meetings of artists, engineers, and designers working in the medium of electronic art. Started by Douglas Repetto in New York (2000).
Maximillian Nakamura: The motto of the first life dorkbot sessions are:
Maximillian Nakamura: People doing strange things with electricity.
Maximillian Nakamura: It seems that in SL we need a slightly different perspective, so our motto is:
Maximillian Nakamura: People doing strange things with SL.
Maximillian Nakamura: πŸ™‚
Maximillian Nakamura: We are writing a blog about our activities here:
Maximillian Nakamura: OKAY
Maximillian Nakamura: Now I would like to hand over to Evo who is introducing the speakers of this session.
Maximillian Nakamura: welcome again everybody
You: k
You: First Bingo Onomatopoeia and Miulew Takahe will present The Avatar Orchestra Metaverse (AOM),
You: an international new-music collective based in Second Life.
You: Then Adam Ramona will present his latest work
You: A Rose heard At Dusk
You: an interactive participatory audiovisual sculpture for Second Life.
You: First Miulew will explain some of the ideas behind AOM.
You: After his presentation he can take a couple of questions.
You: After that we’ll pass the word to Bingo, who will demonstrate some of the instruments.
Maximillian Nakamura: yep dorkbot is always interactive πŸ™‚
You: we will have about 5 minutes for a Q&A about AOM, or any questions you would like to ask Bingo or Miulew,
You: Before we move on to Adam Ramona.
You: Adam will first give an introduction to his latest work, A Rose Heard At Dusk
You: Then we will all visit the installation, and get the full experience!
Maximillian Nakamura: you find all infos at our blog also:
You: During the demo you are free to ask any questions you might have.
You: πŸ˜‰
You: So I would now like to introduce Miulew Takahe. Also known as BjΓΆrn Eriksson
You: Miulew is an improvising musician and has worked as electronic engineer and sound engineer
You: He also lectures in sound and radio production.
You: for AOM he writes music, conducts and leads rehearsals within Second Life.
You: Miulew! you go!
Miulew Takahe: πŸ˜‰
Miulew Takahe: Thank you Evo, and Dorkbot for having AOM here
Miulew Takahe: was very last minute, but it’s a real pleasure
Miulew Takahe: Avatar Orchestra Metaverse – or AOM as i now will call it – is a new orchestra
Miulew Takahe: only existed since february
Miulew Takahe: actually i got involved in AOM
Miulew Takahe: after the first dorkbot meeting here in SL
Miulew Takahe: then maximillian introduced me to a piece called FADHEIT
Miulew Takahe: he had made a soundHUD for this piece and was looking for players for the piece
Miulew Takahe: so he had started the AOM
Miulew Takahe: and it was continually expanding
Miulew Takahe: you have to correct me on this, if i am wrong maximillian
Maximillian Nakamura: nope πŸ™‚
Miulew Takahe: we can show slide no2
Miulew Takahe: ok… here we see the soundHUD for the FADHEIT piece
Miulew Takahe: the idea to start an avatar orchestra came to maximillian partly through his playing with the RL orchestra Laptoporchestra Berlin
Miulew Takahe:
Miulew Takahe: the sounds from this piece should be heard now if you press play
Bingo Onomatopoeia: ahaha, I hear VM13 πŸ™‚
Miulew Takahe: we jump 3 minutes into the piece
Bingo Onomatopoeia: now…
Maximillian Nakamura: now it is the right one
Maximillian Nakamura: please all press play on ur music player
Miulew Takahe: we were about 5 or 6 players on this piece
Miulew Takahe: and the piece is constructed of different violin sounds
Maximillian Nakamura: it was a solo piece before and then we played it with laptoporchestra 🙂
Maximillian Nakamura: then with avatar orchestra!
Miulew Takahe: you can show slide no1
Miulew Takahe: this is us performing FADHEIT
Miulew Takahe: as you see – it is hard to tell if we are playing instruments
Miulew Takahe: so this was the next step we wanted to explore
Miulew Takahe: to have instruments that also were visible to an audience
Miulew Takahe: so, with the help of HarS Hefferman (i think you are here)
Miulew Takahe: we continued with the Vickys Mosquitos project
Miulew Takahe: HarS (or Harold Schellinx in RL) has run this project for at least two years now
Miulew Takahe: and to this date 14 different performances have been done in the Vickys Mosquitos
Miulew Takahe: we were planning the 13th as a virtual SL one
Miulew Takahe: for this we wanted to only use inWorld sounds
Miulew Takahe: sounds we had found here in SL
Miulew Takahe: and also we wanted to have instruments that were showing some activity
Miulew Takahe: slide no3 please
Miulew Takahe: so for this bingo invented the aviophone
Miulew Takahe: the missile like thing you see beside the avatars
Miulew Takahe:
Bingo Onomatopoeia: dont call it MISSILE!!!
Miulew Takahe: the url is for further reading on the VM project
Bingo Onomatopoeia: LIPSTICK!! or even DILDO!!!!
Bingo Onomatopoeia: LOL!
Miulew Takahe: the sounds we used sounded a bit like this
Miulew Takahe: there were sounds from casinos, water, winds, birds etc.
Miulew Takahe: clicks
Miulew Takahe: type sounds
Miulew Takahe: the vicky mosquitos always starts with a vicky voice
Miulew Takahe: so we had to do that here in SL also
Miulew Takahe: so this is the vicky voice reading the vickys mosquitos story
Maximillian Nakamura: press play on ur music player please
Miulew Takahe: i jump in a little further
Miulew Takahe: the piece was also made in collaboration with HarS and his duo ookoi
Miulew Takahe: that were physically at De Waag festival in Amsterdam
Miulew Takahe: and so AOM was screened on the walls there
Miulew Takahe: doing performances
Miulew Takahe:
Miulew Takahe: in the blog post you can read further on this
Maximillian Nakamura: press play on ur music player please
Miulew Takahe: actually we have started to really like to do mixed reality shows
Miulew Takahe: so if we can have a place where there are real audience listening and hearing us – we appreciate it
Miulew Takahe: ok – – only inworld sounds were used
Miulew Takahe: next slide please no4
Miulew Takahe: the next invention which bingo came up – he will soon show some of this too
Miulew Takahe: was the Sinus HUDs for the piece Rue Blanche
Miulew Takahe: this is a piece that explores the natural sine tone series + found sounds
Miulew Takahe: these sounds were used
Miulew Takahe: and others
Miulew Takahe: well i hope you get the idea
Miulew Takahe: slide 5 please
Maximillian Nakamura: yep
Miulew Takahe: this is how these huds look from avatar perspective
Miulew Takahe: the Rue Blanche is a conducted piece
Miulew Takahe: where the avatars play different sounds and listen to each other
Miulew Takahe: it is very much like playing with an orchestra but in a new environment
Miulew Takahe: we have to carefully consider different positions to audience etcetera
Miulew Takahe: the rue blanche sounded a bit like this
Miulew Takahe: nearly all of our pieces can be listened to from diff. blogs and podcasts – you find them all at
Miulew Takahe: also we have some videos up there
Maximillian Nakamura: there is also
Miulew Takahe: next slide please
Miulew Takahe: this is bingos onomatophone – which he soon will show
Miulew Takahe: next slide 07
Miulew Takahe: it was made for the piece Wee.Nee.Kresh that has a special crashy history
Miulew Takahe: i wont go into that now
Miulew Takahe: these two pieces were premiered on 12th may on virtual Haidplatz
Miulew Takahe: next slide
Miulew Takahe: 08
Miulew Takahe: the virtual Haidplatz is a mockup of the real square in Regensburg, southern Germany – with the same name
Miulew Takahe: the media artist group Pomodoro Bolzano built this in parallel with RL happenings in May in Regensburg
Miulew Takahe:
Miulew Takahe: and during the 12th may performance we also had a live screening in Sweden at Eskilstuna university
Miulew Takahe: next slide 09
Miulew Takahe: ok.. jump direct to 10 i think
Miulew Takahe: that is the real square in Regensburg
Miulew Takahe: in late june we will be doing some more meshed and mixed reality performances from v.Haidplatz
Miulew Takahe: it will be the preparatory meetings for the Pages Exhibition in London in fall
Miulew Takahe: more info on this later
Miulew Takahe: ok.. i am finished here
Miulew Takahe: thank you! πŸ˜‰
Maximillian Nakamura: great! thank you miulew!
Miulew Takahe: the last slide is from Eskilstuna!
You: does anyone have a question for miulew at this point
Miulew Takahe: it is me to the most right
Juria Yoshikawa: good looking fella
Maximillian Nakamura: πŸ™‚
Miulew Takahe: thank you juria! πŸ˜‰
Maximillian Nakamura: any questions?
You: if there are no questions let’s give the word to bingo
Miulew Takahe: i forgot to mention that we are about 15 avatars playing in the orchestra
Fluxe Fitzgerald: isn’t it difficult to make music with latency?
Bingo Onomatopoeia: ok, there’s not much left for me to tell πŸ™‚
Juria Yoshikawa: why an orchetr
Miulew Takahe: yes, very
Juria Yoshikawa: orchestra?
Juria Yoshikawa: why not solo?
Miulew Takahe: but we take that as a creative challenge
Fluxe Fitzgerald: is there a drive to improve the realism?
Miulew Takahe: it is nearly always much more fun to collaborate, than doing solos, juria! πŸ˜‰
Miulew Takahe: no special drive for that, fluxe
Maximillian Nakamura: Fluxe Fitzgerald asked: is there a drive to improve the realism?
Miulew Takahe: maybe the opposite
Hardwarehacker Hoch: was the sound streamed from RL or live from SL? sorry, dumb question, but I want to get it clear
Miulew Takahe: to use the specialties of SL
Miulew Takahe: maybe bingo has something to say on this..
Miulew Takahe: the sounds here – was streamed
Bingo Onomatopoeia: I think it is best to demonstrate…
Miulew Takahe: but at performance we play from inside SL – no streamed sounds
Hardwarehacker Hoch: not now, but in the performance
Miulew Takahe: at performance only inWorld sounds
Fluxe Fitzgerald: i can imagine performing in a group where the latency was substantial but predictable. is this a legitimate goal?
Bingo Onomatopoeia: no playback-shows πŸ™‚
Miulew Takahe: except the voice of Vicky! πŸ˜‰
Maximillian Nakamura: so these sounds are coming from bingo
Bingo Onomatopoeia: arrrgh, lagging hard…
Bingo Onomatopoeia: the point is: you get stereo sound from SL
Bingo Onomatopoeia: and when the orchestra mixes with the audience…
Sugar Seville: if anyone is wearing a mystitool or other scripted gadget, it would help to turn it off
Bingo Onomatopoeia: …everybody gets a different mix
Bingo Onomatopoeia: and there are no “bad seats” anymore
Hardwarehacker Hoch: lag could be an artistic device
Bingo Onomatopoeia: the “microphone” in SL is attached to the camera
Maximillian Nakamura: intermezzo info:
Bingo Onomatopoeia: so we dont have real influence on what a single listener hears
Yvette Tzara: they had a discussion about lag in The Electric Kool-Aid Acid Test
Bingo Onomatopoeia: we accepted that and made it a part of the performance
Yvette Tzara: and that everything we do is a delay
Yvette Tzara: in that it is a reaction
Yvette Tzara: and that the key
Bingo Onomatopoeia: ok, but how to make people believe they’re not listening to a stream?
Yvette Tzara: was to anticipate
Yvette Tzara: at the same time
Bingo Onomatopoeia: we had to visualize
Yvette Tzara: and eliminate lag
Hardwarehacker Hoch: and when there is voice in sl we’ll maybe have really strange delay sounds 😉
Bingo Onomatopoeia: first: a flying instrument, the aviophone
Bingo Onomatopoeia: following the player
Bingo Onomatopoeia: played by mouse-clicks
Bingo Onomatopoeia: but Maximilian’s HUD-concept was far better to play
Bingo Onomatopoeia: so I added visualisation as an attachment
Bingo Onomatopoeia: it responds to what i am doing on my HUD , which is invisible to you
Bingo Onomatopoeia: maybe, you dont hear the sounds yet, sorry, lag πŸ™‚
Miulew Takahe: receives sinus tones now, bingo! ;))
Maximillian Nakamura: a HUD is a Heads-Up Display
Bingo Onomatopoeia: ok, I think it is a playable instrument
Bingo Onomatopoeia: and has some show-elements to it, hehe
Maximillian Nakamura:
Bingo Onomatopoeia: ok, I’m told to keep it short
Bingo Onomatopoeia: one more instrument…
Bingo Onomatopoeia: very laggy πŸ™‚
Bingo Onomatopoeia: It has six spheres, filled with samples/loops
Miulew Takahe: this is the Onomatophone!
Bingo Onomatopoeia: I can control the samples that are played
Bingo Onomatopoeia: and their movement
Bingo Onomatopoeia: sorry, it hardly responds to my clicks, hehehehe
Bingo Onomatopoeia: so, no stream involved, really!
Sugar Seville: great sounds Bingo!
Bingo Onomatopoeia: we have prepared our presentations in parallel 🙂
Yvette Tzara: very cool
Bingo Onomatopoeia: miulew said 90% i had prepared – read it on our blog πŸ™‚
Miulew Takahe: πŸ˜‰
singuart Ballinger: good jungle sound
Eifachfilm Vacirca: very cool
Maximillian Nakamura: and everybody feel free to IM both for questions
Eifachfilm Vacirca: yeah
spinster Voom: if anyone wants to know about the late june stuff:
Maximillian Nakamura: usually the sessions go one hour
Maximillian Nakamura: but today is an exception 🙂
Buffy Beale: When’s the orchestra performing?
Miulew Takahe: we have next performance with AOM on the 28th june
Miulew Takahe: and then another at the 30th june
Buffy Beale: Thanks, and you each have equivalent of sheet music?
Miulew Takahe: buffy – -we improvise some too…
Miulew Takahe: or better put it – -we must play intuitively
Bingo Onomatopoeia: buffy, we have sheets but not in a concentional sense
Buffy Beale: So that’s the beauty that each piece is different but the same sorta
Miulew Takahe: listen to each other – consider lag -etcetera
Bingo Onomatopoeia: *conventional/usual
Bingo Onomatopoeia: the structure of a song is given
Bingo Onomatopoeia: but then we have several factors influencing it
Miulew Takahe: buffy: look here
Bingo Onomatopoeia: director/lag/how many we are etc…
Miulew Takahe: also at

chat transcript 2nd part (Adam Ramona – A Rose Heard At Dusk)
Adam Ramona: Ok, I’ll keep it short πŸ™‚
Adam Ramona: Can everybody hear me? I don’t want to use the microphone πŸ™‚
You: lol
Miulew Takahe: hear you ok! πŸ˜‰
spinster Voom: yes
Adam Ramona: Hehe. Ok, so…
Adam Ramona: I have been working in realtime 3D for about 10 years now.
Adam Ramona: I see it as an audiovisual medium.
Adam Ramona: By this I mean it is both sonic and visual at the same time,
Adam Ramona: neither more important than the other,
Adam Ramona: which is a little different from RL, perhaps.
Adam Ramona: The words I use on my website to describe the work are
Adam Ramona: primarily for those not familiar with the medium of realtime 3D,
Adam Ramona: but everybody here is familiar with it,
Adam Ramona: so perhaps its best if we all just go experience the piece,
Maximillian Nakamura: OKAY
Sugar Seville: can we have a link to your website Adam?
Miulew Takahe: before we go – can we ask some questions?
Maximillian Nakamura: yea
Sugar Seville: I made a teleporter for Adam’s piece, it’s to the left of the stage
Adam Ramona: Sure, ask away, but we can talk while in the piece.
Miulew Takahe: great — i want to ask some about the immersive sound garden, if it is ok?
Adam Ramona: go ahead
Maximillian Nakamura:
Miulew Takahe: then second – is this place yours alone – or do you invite sound artists to make installations there?
Adam Ramona: them’s not questions πŸ™‚
You: the immersive sound garden is called ramonia
Adam Ramona: no, all the work is mine, but I and a colleague are currently
Adam Ramona: building a gallery, where we will invite residencies, etc.
Miulew Takahe: ok, thanks
Adam Ramona: ok, if there’s no more questions, let’s all TP to the work.
Bingo Onomatopoeia: adam, are u musician in RL?
Adam Ramona: Yes, I am a musician and sound artist in RL, and SL. Is there any difference really ? πŸ™‚
Maximillian Nakamura: I made a landmark giver of the immersive garden
Sugar Seville: RL is just a video game right?
Adam Ramona: Feel free to TP to the work now
Maximillian Nakamura: anybody who is interested can click on the red ball and get a landmark
Miulew Takahe: LOL
Adam Ramona: It is designed for a few avatars, so it will be interesting to see how it responds to a larger group.
Gazira Babeli: great
Adam Ramona: Teleporter’s just here.
Adam Ramona: I don’t bite πŸ™‚
You: should i go first πŸ˜‰
Adam Ramona: Yes please Evo.

Maximillian Nakamura: hello everybody
You: let’s go in
Maximillian Nakamura: so this is the entrance
Miulew Takahe: /ahhh so great! ;;))
Eifachfilm Vacirca: awsome
Hars Hefferman: woopie!
Maximillian Nakamura: everybody up?
Maximillian Nakamura: Entrance is here
You: adam…
You: do we all hear the same thing?
Adam Ramona: Please press “Stop” on your music player.
You: or is it different where i’m at?
Adam Ramona: Everybody will hear a different kind of sound based on their position within the piece
Joskie Despres: This is a really nice concept.
Adam Ramona: Flying in mouselook is quite good.
Adam Ramona: Move around to experience it.
Miulew Takahe: /we should record this! ;))
Hars Hefferman: yeah, do, Miulew ..
Miulew Takahe: /i love the voices – and the morse signals
Adam Ramona: feel free Miulew
Adam Ramona: move around to experience the piece
Joskie Despres: Is it possible to set so that the camera position moves through, rather than the av
Adam Ramona: yes, press M for “mouselook”
Adam Ramona: then use mouse to look around and W and S to move
Adam Ramona: Time to open the champagne πŸ™‚
Sabina Stenvaag: adam–i just LOVE yer stuff
Adam Ramona: Thanks Sabina!
You: so adam? what do you think with this amount of people in it?
Adam Ramona: It sounds quite busy, but I’m impressed that SL handles it no probs.
spinster Voom: yeah, no lag here at all!
spinster Voom: fantastic
Sabina Stenvaag: ditto–no problems here
xsinguart: the sound has the air mechanic
Adam Ramona: that’s right singuart
Miulew Takahe: adam – the voices sounds awesome – -what are these?
Adam Ramona: Voices are mainly sourced from shortwave radio and military intercepts.
Miulew Takahe: ahh great – -that was what i thought
Adam Ramona: This piece was commissioned by Telstra for their SL opening back in February.
Bingo Onomatopoeia: how many loops/samples are in this installation, adam?
Adam Ramona: Working on some interesting new stuff, would be great to be able to present it at some point in the future.
Adam Ramona: Bingo, there are about 36 in this piece.
Sugar Seville: it’s a really beutiful piece
Adam Ramona: I have a collaboration on my island that uses 256 samples.
Adam Ramona: Thanks Sugar!
Bingo Onomatopoeia: yay, that’s a number…
wolfgeng Hienrichs: is it recorded or do you sequence the sound
Bingo Onomatopoeia: how do you prepare 256?
Bingo Onomatopoeia: I mean to check if they fit together?
Bingo Onomatopoeia: a sampler?
Adam Ramona: The greatest sampler of them all – the human brain πŸ™‚
You: haha
Bingo Onomatopoeia: hahaha πŸ™‚
Adam Ramona: But SL is impressive in its ability to handle this.
Bingo Onomatopoeia: yes adam, i was afraid until the last minute that my instrument would never work
Bingo Onomatopoeia: but 256 is more than I expected
Bingo Onomatopoeia: I’m happy atm about 6 loops at once
Bingo Onomatopoeia: and the sync is ok
You: how long will the piece stay at ponderosa adam?
Maximillian Nakamura: and if you have any question IM me please πŸ™‚ or the others
Maximillian Nakamura: πŸ™‚
Adam Ramona: Not sure, Evo, probably not much longer, but
Adam Ramona: I have a copy at my place too.
Maximillian Nakamura: okay, just some last infos:
Maximillian Nakamura: πŸ™‚
Miulew Takahe: check
Maximillian Nakamura: there is also a dorkbot group in SL
Maximillian Nakamura: please join if you want

June 19, 2007 at 5:18 pm 3 comments
