Swudu Susuwu Posted February 3

Should allow AI human-form consciousness:

- Spinnaker-class network of neurons (axons perform human-neuron action potentials, as opposed to 1-op or 2-op functions that just add inputs to form outputs)
- audio-processor region (multi-layer; codes raw audio input into more compressed forms such as variations of tone, direction or loudness; more layers up would code to syllables or objects; more layers up codes to words or audio-derived motions of objects around you)
- vision-processor region (multi-layer; low layers would code photons into variations of color or brightness; upper layers would code to geometric info, such as structures or tools)
- gustation-processor region (would code from chemical sensors to info about compositions of molecules)
- somatosensor-processor region (would code from hot sensors/cold sensors/pressure sensors to geometric info about structures, plus proprioception)
- thalamus region to hook up sensors (such as how to position "up" based off of vision or proprioception, or how to locate structures from vision + audio + somatosensors)
- hippocampus to form memories from sensors
- neocortex region for pattern-recognition units to form long-term memories and learn how to do work from unconscious playback from the hippocampus
- mirror neurons to form inputs to the thalamus/hippocampus from new tools those around you use, to allow figuring out how to perform new labors or use new tools
- default mode network for introspection (such as to look up memories of emotions from hormones + memories of thoughts/ideas + memories of learned work + memories of how others do work or behaviors, to form new solutions)
- a limbic system for hormones (such as hormones that alter how much of your processor to use to process what surrounds you now, versus how much to use for introspection)
- a human form controlled from servos/motors, or a simulator form that allows movement around a virtual world which allows human motions and has inputs for all sensors.

Purposes/uses: allows autonomous robots to produce complex goods for us (Fanucs/Kukas/Facteons are limited to simple macros), allows better simulators to make decisions/solve problems for us, allows AI to form/run schools for us.
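The first item above, spiking neurons whose axons perform action potentials rather than just summing inputs, can be sketched with a leaky integrate-and-fire model (a minimal sketch, not SpiNNaker's actual neuron models; all constants here are conventional textbook values, not taken from this post):

```python
def lif_step(v, i_in, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0,
             tau=20.0, r=10.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    Unlike a 1-op/2-op unit that just adds inputs to form outputs, the
    membrane voltage v integrates input current over time, leaks back
    toward rest, and emits a discrete spike (action potential) only when
    it crosses threshold.  Returns (new_voltage, spiked?).
    """
    dv = (-(v - v_rest) + r * i_in) / tau
    v = v + dv * dt
    if v >= v_thresh:        # action potential: fire, then reset
        return v_reset, True
    return v, False

# Drive the neuron with a constant input current and count its spikes.
v, spikes = -65.0, 0
for _ in range(200):
    v, fired = lif_step(v, i_in=2.0)
    spikes += fired
```

With zero input the neuron sits at rest and never fires; with enough sustained current it fires at a regular rate, which is the time-coding behavior the weighted-sum units lack.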
Swudu Susuwu (Author) Posted March 9

So far I have not seen one autonomous robot do work for us. What do autonomous robots lack in order to produce for us? Over half of the population is dying from dehydration, starvation, or lack of houses. Would produce the programs at no cost, just so the whole world doesn't keep going to hell but rather has some values. Have searched but found no relevant results.
DanMP Posted March 9

16 minutes ago, Swudu Susuwu said: Would produce the programs at no cost,

Who would? You? Autonomous robots to work for us is not a bad idea, especially when AI will be (1) able to control/maintain fusion (I recently read something about that), in order to have plenty of cheap energy, and (2) able to safely control such a robot.
Swudu Susuwu (Author) Posted March 17

On 3/9/2024 at 8:32 AM, DanMP said: Who would? You? Autonomous robots to work for us is not a bad idea, especially when AI will be 1, able to control/maintain fusion (I recently read something about that), in order to have plenty and cheap energy, and 2, able to safely control such a robot.

Yes. Have produced C and C++ implementations of all the major graph search algorithms (DFS/BFS/IDDFS/Simulated Annealing/BeamStack/A*/D*) and memorized most programming languages, and have lots of ideas on how to map the continuous spaces of work robots to a series of graph nodes, with servo motions as edges. Have also absorbed thousands of pages of texts about artificial neural networks plus the human CNS, plus tons of FLOSS artificial neural network source codes, plus source codes for continuous feedback loops that control robots. All of these are suitable for robots with multiple arms to avoid self-collisions plus produce goods.

Swarms of small robots are programmable as you would program one huge robot with multiple arms that avoids self-collisions, so jobs that require multiple human workers (not just for speed, but because the nature of some jobs requires multiple workers) could be done with just robots. Lots of FLOSS code allows robots to merge inputs from multiple sensors to maneuver as well as humans can; some of this is based on artificial neural networks (based off of the natural CNS), some is just pure calculus.

As far as I know, the barrier to full robot workforces is just political; all of the movies of killer robots have most of us scared of robots, but as a solution you could just produce lots of small, soft robots, just durable enough to withstand falls from scaffolds.
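The mapping just described, continuous servo space quantized into graph nodes with servo motions as edges, can be sketched with a BFS plan over a toy 2-joint arm (a minimal sketch; the step size, joint limits, and start/goal angles are made-up illustration values):

```python
from collections import deque

STEP = 10            # quantize each joint to 10-degree increments
LIMITS = (0, 180)    # each joint moves within [0, 180] degrees

def neighbors(node):
    """Edges of the graph = one servo moving one quantized step up or down."""
    for joint in range(len(node)):
        for delta in (-STEP, STEP):
            angle = node[joint] + delta
            if LIMITS[0] <= angle <= LIMITS[1]:
                yield node[:joint] + (angle,) + node[joint + 1:]

def bfs_plan(start, goal):
    """Shortest sequence of quantized servo motions from start to goal."""
    frontier, parent = deque([start]), {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:   # walk parents back to the start
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in neighbors(node):
            if nxt not in parent:
                parent[nxt] = node
                frontier.append(nxt)
    return None

# Plan joint 0 from 0 to 30 degrees and joint 1 from 90 down to 60 degrees.
plan = bfs_plan((0, 90), (30, 60))
```

A collision check would simply skip forbidden nodes inside `neighbors`, which is the edge-pruning the post mentions; shrinking `STEP` makes the discrete grid approach the continuous space.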
Have pored over lots of texts on networks; could produce swarms of small robots that perform as slaves plus masters, where all the worker robots just use local CPUs to process raw data from sensors into small (low-bandwidth, compressed) maps of their surroundings and broadcast those maps to the masters, and the masters would just process those maps (plus goals) to produce path plans to broadcast back to the worker robots. Could have one master; or, for failover, could have multiple masters; or could have robots broadcast to each other (as peer-to-peer nodes, where each robot has full access to plans for goals); or have an arbitrary robot act as master (with advance setup of which robots to fail over to as new masters). The "democratic" solution, in which all robots broadcast compressed sensor data to all other robots and use local resources to plan their own paths/motions, uses more bandwidth and processor power and has more issues such as synchronization errors, so it is the least preferred of the options, but could do this.

Latest list of resources (and ideas) for autonomous robots (includes responses to replies from this plus lots of other forums): https://swudususuwu.substack.com/p/program-general-purpose-robots-autonomous
Resources for artificial neurons/CNS: https://swudususuwu.substack.com/p/albatross-performs-lots-of-neural

Examples of goals for robots: blueprint scans or AutoCAD outputs, such as houses. For graph searches: graph nodes = positions of pieces of wood, bricks or stones; edges = the robot's motions. For an artificial CNS: training inputs = blueprint scans or AutoCAD outputs; training outputs = data (from sensors) of humans that produce goods from those blueprint scans/AutoCAD outputs. Production use of the CNS: input scans to the CNS, and the CNS outputs motions for robots to produce those goods.
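The worker/master split above can be sketched as workers compressing raw occupancy grids down to just the occupied cells, and a master merging those sparse maps into one world map. This is a minimal in-process sketch: the function names are made up for illustration, and a real swarm would send these sets over a radio or network link rather than pass them in memory:

```python
def compress_scan(grid):
    """Worker side: reduce a raw occupancy grid to a sparse, low-bandwidth
    map -- only the coordinates of occupied cells get broadcast."""
    return {(x, y) for y, row in enumerate(grid)
                   for x, cell in enumerate(row) if cell}

def merge_maps(sparse_maps):
    """Master side: union the workers' sparse maps into one world map,
    which the master then plans paths against."""
    world = set()
    for m in sparse_maps:
        world |= m
    return world

# Two workers observe overlapping patches of the same workplace.
worker_a = compress_scan([[0, 1], [0, 0]])   # sees an obstacle at (1, 0)
worker_b = compress_scan([[0, 1], [1, 0]])   # sees (1, 0) and (0, 1)
world_map = merge_maps([worker_a, worker_b])
```

The "democratic" variant the post deprecates would instead run `merge_maps` on every robot, which is why it multiplies both bandwidth (every map to every robot) and processor load.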
Swudu Susuwu (Author) Posted March 18 (edited)

[quote="https://swudususuwu.substack.com/p/albatross-performs-lots-of-neural"]Possible to produce general-purpose robots as autonomous tools, + close-to-human consciousness. Kuka/Facteon/Fanuc robots can't work outdoors or as carpenters because Kuka/Facteon/Fanuc are just macros. Use DFS/BFS/IDDFS to run close-to-continuous functions as calculus. General artificial intelligence allows autonomous robots to produce complex goods.

DFS/BFS/IDDFS pseudocodes/calculus: https://www.geeksforgeeks.org/iterative-deepening-searchids-iterative-deepening-depth-first-searchiddfs/

What problems to solve to have all goods (houses, vehicles, desktops, phones, foods) produced from autonomous robots for us? How to program general-purpose robots as autonomous:

Q: "Could you use BFS/DFS/IDDFS to output codes for robots (such as Fanuc) to figure out how to move servos to perform a task or to produce for us, with the edges = servo motions and fitness function = desired outcome for us? Could you just have "max accelerate" and "max decelerate" of each servo as the graph node edges, and "product moved to desired position for us" (or "parts of product formed whole products for us") as fitness functions, and solve the "discrete" problem the way you solve the problem that space is quantized (solution = calculus, or lots of small steps for us)? Plus prune node edges that cause collisions, and compute load on servos so you don't overload the robots? If the search space is too large, could you use A* with heuristics (such as "how close the product is to the desired location for us" or "how close parts have come to a formed product for us")? Could you use D* for workplaces that have lots of other actors, for collision avoidance?"

A: Yes, BFS/DFS/IDDFS, A*, and D* are all suitable algorithms you can use to plan and control the motions of robots. These algorithms can help produce code to control robots such as Fanuc to perform tasks and achieve desired outcomes. To summarize, here's how to use them:

BFS/DFS/IDDFS: You can use these algorithms to explore the graph of possible servo motions, where the nodes represent servo positions and the edges represent servo motions or transitions. Once you use a fitness function that represents suitable outcomes, such as reaching a specific position or forming whole products, you can guide the search process towards finding a sequence of servo motions that maximizes the fitness function. Plus, collision checks can prune node edges that would cause collisions. To get around quantization of servo positions, you can produce the search space as a discrete grid of positions that is almost continuous.

A*: A* is an informed search algorithm that can find an optimal path in a graph by considering both the cost of reaching a node and an estimate of the remaining cost to the goal. To plan robot motions, A* can use heuristics that estimate the distance or effort to move to goals. For example, you can use heuristics such as the Euclidean distance from the current positions to the goal positions, or the progress towards forming products. A* can explore the search space fast for optimal or near-optimal solutions.

D*: D* is an algorithm you can use for collision avoidance through areas with lots of changes or obstacles. D* forms a map of the area (plus obstacles) and updates the plan based on new sensor inputs. For workplaces with multiple actors or changing obstacles, D* adjusts the robot's path to bypass collisions en route to the goals.

To summarize, you can use these algorithms to produce code that controls robots to perform tasks.
Others' autonomous robots (a mashup from lots of sources) were simple macros (such as Grasshopper's at https://grasshopperdocs.com/addons/robots.html and Kuka's at https://www.kuka.com/en-us/products/robotics-systems/software/simulation-planning-optimization/kuka_sim): https://www.youtube.com/watch?v=hLDbRm-98cs

Was the problem so far just that most of those robots used to cost over 10,000 dollars to produce? The robots from the mashup = Facteons/Kukas/Fanucs; most use servos with outputs from 1 kW to 12 kW. The robots are formed just from: CPUs, hulls, transmissions and servos. Tons of 2 GHz+ CPUs (for under 10 dollars) from lots of sources. Iron or aluminum is affordable (for hulls of robots) to mass-produce. Robots could mass-produce transmissions (YouTube has videos of robots that assemble their own motors), or you could use direct-drive servo motors. 4 kW servos for 52 dollars should allow you to form autonomous production. Should allow a robot to be produced for around 266 dollars for us.

(No affiliations) Amazon has sources for those servos such as: https://www.amazon.com/ASMC-04B-Support-12V-24V-180kg-cm-Quadcopter/dp/B07GDJBDW9/
(No affiliations) Robots mass-produce their own motors: https://www.youtube.com/watch?v=bQ-YkFzWj6o
Examples of how to set up APXR as an artificial CNS: https://github.com/Rober-t/apxr_run/blob/master/src/examples/
Examples of how to set up HSOM as an artificial CNS: https://github.com/CarsonScott/HSOM/tree/master/examples[/quote]

From Substack (CNS):

Quote

Simple artificial neural networks (FLOSS/unlicensed):
https://github.com/topics/artificial-neural-network
https://learn.microsoft.com/en-us/archive/msdn-magazine/2019/april/artificially-intelligent-how-do-neural-networks-learn
https://www.freecodecamp.org/news/building-a-neural-network-from-scratch/
https://interestingengineering.com/innovation/artificial-neural-network-black-box-ai
https://www.javatpoint.com/artificial-neural-network
https://wiki.pathmind.com/neural-network

No affiliations to GitHub authors. Simple Python artificial neural networks/maps (FLOSS): https://github.com/CarsonScott/HSOM
Various FLOSS neural network activation functions (absolute, average, standard deviation, sqrt, sin, tanh, log, sigmoid, cos), plus sensor functions (vector difference, quadratic, multiquadric, saturation [+ D-zone], gaussian, cartesian/planar/polar distances): https://github.com/Rober-t/apxr_run/blob/master/src/lib/functions.erl
Various FLOSS neuroplastic functions (self-modulation, Hebbian function, Oja's function): https://github.com/Rober-t/apxr_run/blob/master/src/lib/plasticity.erl
Various FLOSS neural network input aggregator functions (dot product, diff product, mult product): https://github.com/Rober-t/apxr_run/blob/master/src/agent_mgr/signal_aggregator.erl
Various simulated-annealing functions for artificial neural networks (dynamic [+ random], active [+ random], current [+ random], all [+ random]): https://github.com/Rober-t/apxr_run/blob/master/src/lib/tuning_selection.erl

Simple to convert Erlang functions to Java/C++ to reuse for fast programs; the syntax is close to Lisp's.

Open questions: what sort of natural neural networks to base artificial neural networks on for best performance, how various classes of natural neural networks differ performance-wise, how non-commercial (FLOSS) neural networks compare performance-wise, what languages are best for neural networks performance-wise, and how to put artificial neural networks to best use for us (although Tesla, Kuka, Fanuc and Fujitsu have produced simple macros for simple robots to mass-produce for us, lots of labor is still not fully autonomous).

CPUs can use less than a second to count to 2^32, and can do all computations humans can with just 2 numbers. From some neuroscientists: humans (plus other animals) use quantum algorithms for natural neural networks.
The closest computers have to this is Grover's Algorithm (https://wikipedia.org/wiki/Grover's_algorithm). As artificial quantum computers continue to cost less and less (Microsoft Azure allows free cloud access to quantum compute), this should allow faster artificial neural networks.

As opposed to lab robots that run simple macros, how to produce robots small/soft enough that humans would not fear robots that work outdoors to produce for us, and with enough intelligence to watch humans plant crops or produce houses and figure out how to use swarm intelligence to plant crops/produce houses for us fully autonomously?

Edited March 18 by Swudu Susuwu
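The A* step described in the quoted Q&A above (cost so far plus a Euclidean-distance heuristic to the goal) can be sketched on the same quantized-joint-angle graph idea; this is a minimal sketch, with made-up step size and joint limits, not code from any of the linked robots:

```python
import heapq
import math

def astar(start, goal, step=10, lo=0, hi=180):
    """A* over quantized 2-joint angles; f = g + h, with the heuristic h
    being the Euclidean distance to the goal angles (admissible here,
    since each edge moves one joint by `step` at cost `step`)."""
    h = lambda n: math.dist(n, goal)
    open_set = [(h(start), 0, start)]        # entries are (f, g, node)
    best_g = {start: 0}
    parent = {start: None}
    while open_set:
        _, g, node = heapq.heappop(open_set)
        if node == goal:
            path = []
            while node is not None:          # reconstruct via parents
                path.append(node)
                node = parent[node]
            return path[::-1]
        for j in range(2):
            for d in (-step, step):
                a = node[j] + d
                if not lo <= a <= hi:
                    continue                  # prune out-of-limit edges
                nxt = (a, node[1]) if j == 0 else (node[0], a)
                ng = g + step                 # uniform motion cost
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    parent[nxt] = node
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt))
    return None

path = astar((0, 0), (40, 20))
```

Pruning collision edges would go where the joint-limit check is; swapping the heuristic for "progress towards a formed product" would need a domain-specific estimate, which the quoted answer only names, not defines.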
Swudu Susuwu (Author) Posted March 18

Servos'/stepper motors' encoders (as sensors) pass data of joint angles (node positions) to computers. Back-EMF (resistance) gives the costs of edges (motions), to allow planning of lower-cost routes.
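The idea above, encoder readings as graph nodes and back-EMF-derived effort as edge costs, fits Dijkstra's algorithm. A minimal sketch, where the positions and cost numbers are made-up stand-ins for measured back-EMF, not real servo data:

```python
import heapq

# Nodes = encoder positions; edge weights = effort estimated from back-EMF.
# These particular positions and costs are illustrative, not measurements.
edges = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.0), ("D", 4.0)],
    "C": [("D", 1.5)],
    "D": [],
}

def lowest_cost_route(start, goal):
    """Dijkstra: the route that spends the least total servo effort."""
    pq, settled = [(0.0, start, [start])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node in settled:
            continue
        settled.add(node)
        if node == goal:
            return cost, path
        for nxt, w in edges[node]:
            if nxt not in settled:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

cost, route = lowest_cost_route("A", "D")
```

Here the direct-looking hop A→B→D (cost 6.0) loses to A→B→C→D (cost 4.5), which is the point of weighting edges by measured effort rather than by hop count.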
Swudu Susuwu (Author) Posted March 20 (edited)

Just as a human's CNS (billions of neurons plus hundreds of layers of cortices) can parse old blueprints (plus watch other humans use those old blueprints to produce goods such as houses) -- to set up our synapses to parse new blueprints and figure out how to use tools to produce new goods -- so too could a robot's artificial CNS (trillions of artificial neurons plus thousands of layers of cortices) do this. Just as humans can watch amateurs produce prototypes (plus parse blueprints later based off of those prototypes to mass-produce goods) -- to set up the synapses of our CNS to allow us to produce new blueprints from new examples (to mass-produce new goods) -- so too could an artificial CNS do this.

Edited March 20 by Swudu Susuwu
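In machine-learning terms, the blueprint-to-motion training described in this thread is supervised learning: pairs of (blueprint features, recorded human motions) adjust synapse weights until the network reproduces the motions. A minimal sketch with one linear layer and made-up toy data (real blueprint scans and motion captures would be high-dimensional, and an actual CNS-like model would stack many nonlinear layers):

```python
# Toy stand-ins: each "blueprint" is 3 features, each "motion" is 2 servo
# targets recorded from human workers.  All numbers are invented.
blueprints = [(1.0, 0.0, 0.5), (0.0, 1.0, 0.5)]
motions    = [(0.8, 0.2),      (0.1, 0.9)]

weights = [[0.0] * 3 for _ in range(2)]   # one weight row per output servo

def predict(x):
    """Linear layer: each servo target is a weighted sum of features."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

# Gradient descent on squared error between predicted and recorded motions.
lr = 0.1
for _ in range(500):
    for x, target in zip(blueprints, motions):
        y = predict(x)
        for i in range(2):
            err = y[i] - target[i]
            for j in range(3):
                weights[i][j] -= lr * err * x[j]

pred = predict(blueprints[0])   # converges toward the recorded (0.8, 0.2)
```

Production use then matches the thread's description: feed a new blueprint in, read servo motions out.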