passer.life
Artificial Life
HumanoidControl.com / RoboidControl.com
Tracking technology
34 posts
34 followers
156 following
Regular Contributor
Conversation Starter
comment in response to post
If I were (still) working at a university, this would be a great starting point for a paper. Now, being an independent developer, I need to find a way to turn this into new work...
comment in response to post
The rest (cohesion factor, alignment factor, separation factor and separation distance) are all weights in the network and can therefore be trained...
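A minimal sketch of that idea (my own illustration, not the actual network): the factors act like weights on fixed input features, so they can be tuned by gradient descent or an evolutionary search.

    import numpy as np

    # Illustrative names and initial values only.
    params = np.array([1.0, 0.5, 2.0])  # cohesion, alignment, separation factors
    separation_distance = 1.5           # neighbours closer than this trigger separation

    def steering(cohesion_vec, alignment_vec, separation_vec, w=params):
        # A weighted sum of the three rule vectors; the weights are the
        # trainable parameters mentioned above.
        return w[0] * cohesion_vec + w[1] * alignment_vec + w[2] * separation_vec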
comment in response to post
- cohesion is just the sum of the relative positions of the neighbours;
- alignment is the average of the relative velocities;
- separation is the sum of the relative positions, each rescaled to magnitude 1/v, where v is its original magnitude.
(The last one is still a bit too complex: WIP)
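A minimal sketch of those three rules exactly as stated, in plain NumPy (the array shapes are my assumption):

    import numpy as np

    def boid_rules(rel_positions, rel_velocities):
        # rel_positions, rel_velocities: (N, 3) arrays, relative to this boid.
        # Cohesion: just the sum of the relative positions of the neighbours.
        cohesion = rel_positions.sum(axis=0)
        # Alignment: the average of the relative velocities.
        alignment = rel_velocities.mean(axis=0)
        # Separation: each relative position rescaled to magnitude 1/v,
        # where v is its original magnitude (closer neighbours push harder).
        v = np.linalg.norm(rel_positions, axis=1, keepdims=True)
        v = np.maximum(v, 1e-9)  # guard against division by zero
        separation = (rel_positions / v**2).sum(axis=0)
        return cohesion, alignment, separation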
comment in response to post
The solution is to add a Collider to the avatar so that the robot can detect it. This is actually (surprisingly) difficult to achieve with the GLTF model format.
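For context: GLTF carries render meshes only, no physics shapes, so collider geometry has to be derived after import. A minimal sketch of that derivation using the trimesh Python library (the file name is hypothetical; in Unity the equivalent is adding collider components after the runtime import):

    import trimesh

    # GLB/GLTF files contain render geometry only; collision shapes must be built.
    scene = trimesh.load("avatar.glb")  # hypothetical avatar export

    for name, mesh in scene.geometry.items():
        # Use the axis-aligned bounding box of each mesh as a cheap collider proxy.
        box = mesh.bounding_box
        print(name, "collider extents:", box.extents)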
comment in response to post
An interesting fact is that the Unity instance on the left does not have any knowledge about the Humanoid in its project assets. This contrasts with other networking solutions, where every instance needs to have the same Unity assets.
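A minimal sketch of why that can work, assuming the model bytes themselves travel over the wire instead of an asset ID that both sides must already share (the socket layout is my own illustration):

    import socket, struct

    def send_model(sock: socket.socket, glb_path: str) -> None:
        # Stream the full GLB bytes, so the receiver needs no matching local asset.
        data = open(glb_path, "rb").read()
        sock.sendall(struct.pack("!I", len(data)) + data)

    def receive_model(sock: socket.socket) -> bytes:
        # Length-prefixed read: first the size, then the model itself.
        (size,) = struct.unpack("!I", _read_exact(sock, 4))
        return _read_exact(sock, size)

    def _read_exact(sock: socket.socket, n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed early")
            buf += chunk
        return buf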
comment in response to post
The Quest Pro. Although I hate how fast Meta forgot about it, it is the only one where I can keep my glasses on.
comment in response to post
I would _love_ to do a dive in the NBF with Columbus, but they didn't let me...
comment in response to post
It is actually mixed reality for robots: the ultrasonic/IR sensors detect real objects, while at the same time the robot collides with virtual objects in the simulation.
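A minimal sketch of that fusion, assuming one forward distance reading from the real sensor and one from a raycast in the simulation (the function and names are hypothetical):

    def fused_obstacle_distance(real_ultrasonic_m: float, virtual_raycast_m: float) -> float:
        # Mixed reality for a robot: whichever obstacle is nearer wins,
        # whether it exists physically or only in the simulation.
        return min(real_ultrasonic_m, virtual_raycast_m)

    # e.g. a real wall at 2.1 m, a virtual crate at 0.8 m -> the robot avoids the crate
    print(fused_obstacle_distance(2.1, 0.8))  # 0.8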
comment in response to post
I included eye behaviour in Humanoid Control Pro a while ago. It is based on the same ideas I am using for the ants now.
comment in response to post
I have a way to control them :-)
humanoidcontrol.com
comment in response to post
Fun fact: the ants bite each other when they are close to food. This is because they can't smell each other, and the strongest other smell is the food, making them think they are biting food.
comment in response to post
Initial implementation. Every line is a smelled thing. The light lines have focus. The red line is the direction in which the most interesting/focused things can be found. Seems to work well.
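A minimal sketch of how that red line could be computed, assuming each smelled thing has a direction and an interest score, with focused things weighted more strongly (the weighting scheme is my own illustration):

    import numpy as np

    def focus_direction(directions, interest, focus, focus_boost=3.0):
        # directions: (N, 2) unit vectors to smelled things;
        # interest: (N,) scores; focus: (N,) booleans (the light lines).
        weights = interest * np.where(focus, focus_boost, 1.0)
        combined = (directions * weights[:, None]).sum(axis=0)
        norm = np.linalg.norm(combined)
        # The red line: unit direction toward the most interesting/focused things.
        return combined / norm if norm > 0 else combined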
comment in response to post
This is so in line with what I have seen in VR experiences over the years. When the interaction with virtual objects is realistic, the body acts as if the physical interaction were real. People even experience weight when lifting heavy virtual objects.