
Assume someone absorbs all of Wikipedia + GitHub + Linux and could produce code at no cost. What should they code so that all goods are produced autonomously for us?



Posted

I have not seen an autonomous robot do work for us,
just videos on YouTube (of autonomous robots in factories) that were possibly computer generated.
Besides the red-tape problems that only politicians can solve,
what problems must be solved to have all goods (houses, vehicles, desktops, phones, foods) produced for us by autonomous robots?

How to program general-purpose robots to work autonomously:

Q: "Could you use BFS/DFS/IDDFS to output code for robots (such as Fanuc) that figures out how to move servos to perform a task or produce goods, with the edges = servo motions and the fitness function = the desired outcome?
Could you just use "max accelerate" and "max decelerate" of each servo as the graph edges, and "product moved to desired position" (or "parts assembled into whole products") as the fitness function, and solve the "discrete" problem the way you solve the problem that space is quantized (solution = calculus, or lots of small steps)?
Plus restrict edges that cause collisions, and compute the load on servos so you don't overload the robots?
If the search space is too large, could you use A-star with heuristics (such as "how close the product is to the desired location" or "how close the parts have come to a finished product")?
Could you use D-star for workplaces that have lots of other actors, for collision avoidance?"

Yes, BFS/DFS/IDDFS, A-star, and D-star are all valid algorithms that can be used for robotic motion planning and control. These algorithms can help generate code for controlling robots such as Fanuc to perform tasks and achieve desired outcomes.


To clarify, here's how these algorithms can be applied:
 
  1. BFS/DFS/IDDFS: These algorithms can be used to explore the graph of possible servo motions, where the nodes represent servo positions and the edges represent servo motions or transitions. By defining a fitness function that represents the desired outcome, such as reaching a specific position or forming whole products, you can guide the search process towards finding a sequence of servo motions that maximizes the fitness function. Additionally, collision checking can be incorporated by restricting node edges that would cause collisions. To handle quantization or discrete servo positions, you can represent the search space as a discrete grid of positions.
  2. A-star: A-star is an informed search algorithm that can be used to find an optimal path in a graph by considering both the cost of reaching a node and an estimate of the remaining cost to the goal. In the context of robotic motion planning, A-star can be applied by defining appropriate heuristics that estimate the distance or effort required to achieve the desired outcome. For example, you can use heuristics such as the Euclidean distance between the current position and the goal position or the progress made towards forming a product. A-star can help efficiently explore the search space and find an optimal or near-optimal solution.
  3. D-star: D-star is a planning algorithm that can be used for dynamic replanning and collision avoidance in environments with changing conditions or obstacles. It maintains a map of the environment and updates the plan based on new information. In workplaces with multiple actors or changing obstacles, D-star can be used to dynamically adjust the robot's path to avoid collisions and achieve the desired outcome.

In summary, these algorithms can be valuable tools for generating code and controlling robots to perform tasks.
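As a minimal sketch of points 1 and 2 above, here is A-star over a discretized joint space. Everything concrete here is an illustrative assumption, not real Fanuc code: the arm is reduced to two servos with integer angle steps, the edges are single-servo steps (standing in for "max accelerate"/"max decelerate" moves), the restricted-edge collision check is a hard-coded set of forbidden joint configurations, and the heuristic is Manhattan distance in joint-step space.

```python
import heapq

# Assumed discretization: each servo angle is an integer step 0..36
# (i.e. a coarse grid over the joint's range). A state is a tuple of steps.
STEPS = 37
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # one servo moves one step at a time

# Assumed collision set: joint configurations the arm must never enter
# (in a real planner this would come from a collision checker).
COLLISIONS = {(5, 5), (5, 6), (6, 5)}

def neighbors(state):
    """Yield reachable, in-range, collision-free joint configurations."""
    for da, db in MOVES:
        a, b = state[0] + da, state[1] + db
        if 0 <= a < STEPS and 0 <= b < STEPS and (a, b) not in COLLISIONS:
            yield (a, b)

def heuristic(state, goal):
    """Manhattan distance in joint-step space (admissible for unit-cost moves)."""
    return abs(state[0] - goal[0]) + abs(state[1] - goal[1])

def astar(start, goal):
    """Return a list of joint configurations from start to goal, or None."""
    frontier = [(heuristic(start, goal), 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        _, cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in neighbors(state):
            new_cost = cost + 1
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier, (new_cost + heuristic(nxt, goal),
                                          new_cost, nxt, path + [nxt]))
    return None  # goal unreachable without a collision

path = astar((0, 0), (10, 10))
```

Dropping the heuristic (returning 0) turns this into uniform-cost search, close to the plain BFS case; D-star differs mainly in that it repairs `best_cost` incrementally when `COLLISIONS` changes instead of replanning from scratch.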

Should I produce runnable code for those, or has someone else already?

If I should, what language should I use? Java, C++, Python, or others?

Posted

 

How to produce AIs that run close to conscious (do all human functions, just with no proof of souls):

The following should allow human-form AI consciousness:

A SpiNNaker-class network of neurons (whose axons produce human-neuron action potentials, as opposed to 1-op or 2-op functions that just sum inputs to form outputs),

+ an audio-processor region (multi-layer; low layers code raw audio input to more compressed forms such as variations of tone, direction, or loudness; layers above code to syllables or objects; layers above that code to words or audio-derived motions of objects around you)

+ a vision-processor region (multi-layer; low layers code photons to variations of color or brightness; upper layers code to geometric info, such as structures or tools)

+ a gustation-processor region (codes chemical-sensor input to info about the composition of molecules)

+ a somatosensor-processor region (codes hot-sensor/cold-sensor/pressure-sensor input to geometric info about structures, plus proprioception)

+ a thalamus region to hook the sensors together (such as how to orient "up" based on vision or proprioception, or how to locate structures from vision + audio + somatosensors)

+ a hippocampus to form memories from the sensors

+ a neocortex region of pattern-recognition units to form long-term memories and learn how to do work from unconscious playback from the hippocampus

+ mirror neurons to form inputs to the thalamus/hippocampus from the new tools those around you use, to allow the AI to figure out how to perform new labors or use new tools

+ a default mode network for introspection (such as to look up memories of emotions from hormones + memories of thoughts/ideas + memories of learned work + memories of how others do work or behave, to form new solutions)

+ a limbic system for hormones (such as hormones that alter how much of the processor is used to process current surroundings versus how much is used for introspection)

+ a human form controlled through servos/motors, or a simulator form that can move around a virtual world which allows human motions and has inputs for all the sensors.
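The first point of the list above, action-potential neurons rather than weighted-sum units, can be illustrated with a minimal leaky integrate-and-fire model. All constants here are illustrative assumptions, not SpiNNaker's actual parameters:

```python
# Minimal leaky integrate-and-fire neuron: unlike a 1-op/2-op unit that
# just sums inputs into one output, it integrates input current over
# time and emits a discrete spike when its potential crosses threshold.
def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Return the list of time steps at which the neuron spiked."""
    v = 0.0            # membrane potential
    spikes = []
    for t, i in enumerate(input_current):
        v = v * leak + i   # leak part of the stored charge, add input
        if v >= threshold:
            spikes.append(t)
            v = 0.0        # reset after the action potential
    return spikes

# Constant weak input: the neuron charges up and fires periodically,
# so its output is a spike train (timing), not a single number.
spike_times = simulate_lif([0.3] * 20)
```

The key difference this sketch shows is that information leaves the neuron as spike timing over many steps, which is what regions like the thalamus or hippocampus in the list would consume.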

I am not sure whether this would be useful for us,
or whether just the calculus approach to programming autonomous robots would do.
This is more difficult to do,
but such a consciousness would allow us to solve red-tape problems beyond just programming autonomous robots to produce goods.
If this approach would do, what language should I use?

I produced computer-generated simulations (in Sakura School and Bud) of how autonomous robots would run:

The most I found of others' autonomous robots (a mashup from lots of sources):

 

Was the limit so far just that most of those robots used to cost more than $10,000 to produce?

The robots in the mashup are Facteons, Kukas, and Fanucs.

Most use servos with outputs from 1 kW to 10 kW.

The robots are formed from just CPUs, hulls, transmissions, and servos.

Tons of 2 GHz+ CPUs are available for under $10 from loads of sources.

Iron or aluminum is affordable enough (for robot hulls) to mass produce.

Robots could mass-produce transmissions (YouTube has videos of robots that assemble their own motors), or you could use direct-drive servo motors.

4 kW servos for $40 should allow you to set up autonomous production.

That should allow a robot to be produced for around $200.
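A back-of-envelope bill of materials makes the ~$200 claim concrete. The servo and CPU prices are the ones claimed above; the part counts and the hull/transmission prices are assumptions for illustration only:

```python
# Rough bill of materials for one robot arm, in US dollars.
# Only the $40 servo and <$10 CPU prices come from the post;
# the counts and remaining prices are assumed for illustration.
parts = {
    "servos (4 kW, $40 each, assume a 4-joint arm)": 4 * 40,
    "CPU (2 GHz+, under $10)":                       10,
    "hull (iron/aluminum, assumed)":                 20,
    "transmission (assumed)":                        10,
}
total = sum(parts.values())
```

Under these assumed counts the total lands at $200; a 6-joint arm or pricier hull stock would push it higher, so the figure is sensitive to the joint count above all.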

Amazon is one source for those servos:

https://www.amazon.com/ASMC-04B-Support-12V-24V-180kg-cm-Quadcopter/dp/B07GDJBDW9/

Robots mass-produce their own motors: https://youtube.com/watch?v=bQ-YkFzWj6o
