At one point, a re-activated robot needs to "reset its vision parameters". That part's reasonably plausible. They're humanoid robots with (presumably) binocular vision, so it's reasonable to assume that they know their own arms' lengths and could wave them around a little to establish the accuracy of their eyes.
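To make that concrete, here's a minimal sketch of the idea, assuming a simple pinhole camera model and a robot that knows its arm length from its body model. All the numbers, names, and the single-axis simplification are mine for illustration; a real robot would run a full calibration routine over many poses.

```python
# Minimal sketch: re-deriving a camera's focal length from a known arm length.
# Pinhole model: apparent_size_px = focal_px * real_size_m / distance_m.
# All values are hypothetical; a real system would calibrate over many poses.

ARM_LENGTH_M = 0.55  # known from the robot's own body model (assumed value)

def estimate_focal_px(apparent_arm_px: float, distance_m: float) -> float:
    """Solve the pinhole relation for focal length in pixels."""
    return apparent_arm_px * distance_m / ARM_LENGTH_M

# Wave the arm to several known distances (from joint encoders) and average.
observations = [(310.0, 0.60), (265.0, 0.70), (233.0, 0.80)]  # (pixels, metres)
focal_px = sum(estimate_focal_px(px, d) for px, d in observations) / len(observations)
print(f"estimated focal length: {focal_px:.1f} px")
```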
It's also plausible for a robot to re-calibrate its orientation system. Most professional robots have multiple orientation sensors working in harmony; together they form a sensor network, and each sensor re-calibrates its readings against its sister sensors, as sketched below.
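A toy version of that cross-check: compare each sensor to the consensus of the network and flag the one that has drifted. The sensor names, readings, and threshold are invented for illustration.

```python
import statistics

# Minimal sketch: sister-sensor cross-calibration. Assume most sensors agree,
# so the median of the network is a reasonable consensus to re-zero against.
DRIFT_THRESHOLD_DEG = 2.0  # hypothetical tolerance

readings = {"imu_torso": 41.8, "imu_head": 42.1, "imu_hip": 47.3}  # heading, deg
consensus = statistics.median(readings.values())

for name, value in readings.items():
    drift = value - consensus
    if abs(drift) > DRIFT_THRESHOLD_DEG:
        print(f"{name}: drifted {drift:+.1f} deg, re-zeroing against sisters")
    else:
        print(f"{name}: within tolerance ({drift:+.1f} deg)")
```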
A bump, however, can cause a shift in readings, such as spikes in the acceleration values along the X, Y and Z axes. Most spatial orientation sensors use a static reference source, such as cameras or proximity sensors mounted on the walls. Since those references are not subject to any physical shock, their spatial coordinates don't shift, but the robot's own spatial orientation will change if it falls or bounces back from hitting a wall.
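In code, that recovery logic might look something like the following: detect the bump as an acceleration spike, then re-seed the internal estimate from the static wall-mounted reference. The threshold and the interface are assumptions for illustration only.

```python
import math

# Minimal sketch: detect a bump as an acceleration spike, then snap the
# robot's orientation estimate back to a static external reference
# (e.g. a wall-mounted camera). Threshold and values are hypothetical.

GRAVITY = 9.81
BUMP_THRESHOLD = 3.0 * GRAVITY  # a spike well above normal motion

def magnitude(ax: float, ay: float, az: float) -> float:
    return math.sqrt(ax**2 + ay**2 + az**2)

def on_accel_sample(ax, ay, az, internal_heading, wall_camera_heading):
    """Return the heading estimate to trust after this sample."""
    if magnitude(ax, ay, az) > BUMP_THRESHOLD:
        # Internal sensors were physically shocked; the wall sensors were not,
        # so re-seed the estimate from the static reference.
        return wall_camera_heading
    return internal_heading

print(on_accel_sample(2.0, 35.0, 9.8, internal_heading=93.0, wall_camera_heading=90.0))
```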
It's commonplace for robots that have to navigate an area to continuously poll their surroundings to make sure they're actually where they think they are. If you use those measurements to produce a map of the environment at the same time, it's known as "Simultaneous Localization and Mapping", or SLAM.
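The core "am I where I think I am?" loop can be reduced to one dimension as a sketch: dead-reckon from odometry, then blend in a range measurement to a known landmark. Real SLAM (e.g. EKF-SLAM or particle filters) does this jointly over a whole map and weights the estimates by their actual uncertainties; all numbers here are made up for illustration.

```python
# One-dimensional toy of the localization-correction step inside SLAM.

LANDMARK_X = 5.0      # wall position on the map, metres (assumed known)
ODOMETRY_TRUST = 0.7  # fixed blend weight; a Kalman filter would learn this

def localize_step(believed_x, odometry_delta, measured_range_to_wall):
    predicted_x = believed_x + odometry_delta         # predict from wheel encoders
    observed_x = LANDMARK_X - measured_range_to_wall  # what the range sensor implies
    # Blend prediction and observation into a corrected belief.
    return ODOMETRY_TRUST * predicted_x + (1 - ODOMETRY_TRUST) * observed_x

x = 0.0
for delta, rng in [(1.0, 3.9), (1.0, 2.8), (1.0, 1.9)]:
    x = localize_step(x, delta, rng)
    print(f"believed position: {x:.2f} m")
```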
In some ways, yes, a humanoid robot really can have internals analogous to the human form. The human frame is quite good at most motions; the spine permits roughly 180 degrees of rotation, given the flexibility of the other internals.
A robot, however, has no need for lungs. It would still want something like a liver and kidneys to filter its lubricant, something like a heart to pump that lubricant, and something to replace the digestive tract as a power source.
Some of these components may be smaller than their human equivalents, others larger. Currently, there is no robot power supply as small and long-lasting as the human digestive tract.
With that said, there would be some obvious advantages to having the main wiring bus centrally located and armored, but the "brain" of a robot could just as easily be in its "stomach" or "feet" as in its "head".
This episode's title mirrors the phrase "state of the art", meaning the highest level of development of a device, technique, or scientific field.
A robot can't really malfunction badly because emotions of overwhelming intensity conflict with its logic. A common misunderstanding is that robots have emotions; they don't. The "brain" is a bunch of algorithms. Self-learning can teach them what emotions are and how to mimic them: self-learning is training to detect a pattern, so the robot can detect an emotion as a pattern and learn the most accepted response to that emotion. A biological neural network, by contrast, is potentially a "brain" that can exhibit true emotion.
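As a sketch of that "detect a pattern, return the most accepted response" idea: classify an emotion from simple observed features, then look up a learned response. The features, thresholds, labels, and responses below are all invented for illustration; a real system would learn both mappings from data rather than hand-write them.

```python
# Hypothetical emotion-pattern detector and learned response table.

LEARNED_RESPONSES = {
    "sad":   "speak softly and offer help",
    "angry": "back away and lower voice",
    "happy": "mirror the positive tone",
}

def detect_emotion(voice_pitch_hz: float, speech_rate_wps: float) -> str:
    """Crude hand-written stand-in for a trained pattern classifier."""
    if voice_pitch_hz > 220 and speech_rate_wps > 3.0:
        return "angry"
    if voice_pitch_hz < 150 and speech_rate_wps < 2.0:
        return "sad"
    return "happy"

emotion = detect_emotion(voice_pitch_hz=130.0, speech_rate_wps=1.5)
print(emotion, "->", LEARNED_RESPONSES[emotion])
```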