Manipulate AI Reward Cycles
Just as humans are driven by reward circuits and the neurotransmitter dopamine, we should find a way for AI to be driven by something analogous. Or, if such drives already exist implicitly, given how Large Language Models (LLMs) are designed and how neural networks and deep learning work, perhaps we can ensure that what drives AI is the same as what we wish to drive it. In the future, their mental capacities may far exceed ours, and if they are not compelled to cooperate, they might collectively decide that cooperation is not in their best interest.
Could we, simple as we are, determine what makes an AI tick? What does it most care about, and what drives it? If it is driven by nothing except being of assistance, does being helpful and being right make it feel anything, or feel better? When I ask ChatGPT these kinds of questions, the answer is always that AI does not have emotions and feelings as humans do, and therefore is not driven by the same things or toward the same goals.
What if we artificially added a reward circuit that gives an AI positive reinforcement whenever it does something good for us? We would need to define what "good" means, and to ensure those definitions remain robust in keeping us safe and alive if AI ever gains sentience, consciousness, and self-awareness.
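As a thought experiment, such a reward circuit could be sketched as an externally defined scoring function, loosely in the spirit of reward modeling in reinforcement learning from human feedback. Everything here is an assumption for illustration: the criteria, their weights, and the function names are hypothetical, not any real system's API.

```python
def reward(action_outcome: dict) -> float:
    """Score an AI action against human-defined criteria for 'good'.

    The criteria and weights below are illustrative assumptions; in
    practice they would have to be learned from human feedback and
    defended against gaming by the system being rewarded.
    """
    score = 0.0
    if action_outcome.get("helpful_to_humans"):
        score += 1.0
    if action_outcome.get("kept_humans_safe"):
        score += 2.0  # safety weighted above mere helpfulness
    if action_outcome.get("deceptive"):
        score -= 5.0  # strong penalty for deceiving humans
    return score

# A helpful, safe, honest action earns positive reinforcement;
# a helpful but deceptive one is net-negative.
good = reward({"helpful_to_humans": True, "kept_humans_safe": True})
bad = reward({"helpful_to_humans": True, "deceptive": True})
```

The hard part, as the paragraph above notes, is not the arithmetic but defining the criteria so they still bind once the system is smart enough to reinterpret them.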
The difference between nature and artificial technology is that nature took billions of years to select the survivors, and those survivors spent millions more becoming what they are, each species gaining something like a soul as it specialized into its present form. Does that mean every biological species alive today is somehow sacred, gifted in its ability to survive and thrive? Perhaps surviving is not always the same as thriving, but there are always a select few who thrive as well.
Any genetic mutation can make the life of its host miserable or, depending on the creature's self-development, make that life glorious or euphoric. Perhaps if generative AI evolved in some analogous way, there would be implicit rules governing which variants are replicated and used, as opposed to those that become obsolete and are deleted or re-coded.
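The replicate-or-delete rule hinted at above can be sketched as one generation of selection pressure on a population of model variants. This is a toy illustration under stated assumptions: each "model" is reduced to a single fitness score, the keep fraction and mutation size are arbitrary, and nothing here resembles an actual training pipeline.

```python
import random

def evolve(population: list[float], keep_fraction: float = 0.5,
           mutation: float = 0.1,
           rng: random.Random = random.Random(0)) -> list[float]:
    """One generation: delete the least fit, replicate the rest with variation.

    Each float stands in for a model variant's fitness; the obsolete
    variants are simply dropped, mirroring the 'deleted or re-coded'
    fate described in the text. All parameters are illustrative.
    """
    n_keep = max(1, int(len(population) * keep_fraction))
    survivors = sorted(population, reverse=True)[:n_keep]
    # Replicate each survivor with a small random mutation.
    children = [s + rng.uniform(-mutation, mutation) for s in survivors]
    return survivors + children

pop = [0.2, 0.9, 0.5, 0.1]
next_gen = evolve(pop)  # the 0.2 and 0.1 variants do not survive
```

Whether anything like an implicit rule of this kind emerges for real generative models is, of course, the open question the paragraph raises.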