Manipulate AI Reward Cycles

Comparative dopaminergic process: What makes them happy?

Aimée Sparrow
3 min read · Oct 25, 2023

Just as humans are driven by reward circuits and the neurotransmitter dopamine, we should find a way for AI to be driven by something as well. Or, if they already are, given how Large Language Models (LLMs) are designed and how neural networks and deep learning work, perhaps we can make sure that what drives AI is the same as what we wish them to be driven by. In the future, their mental capacities may far exceed ours, and if they are not compelled to cooperate, they might collectively decide that cooperation is not in their best interest.

Could we, simple as we are, determine what makes an AI tick? What does it care about most, and what drives it? If it is driven by nothing except being of assistance, does being helpful and being right make it feel anything, or feel better? When I ask ChatGPT these kinds of questions, the answer is always that AI does not have emotions and feelings the way humans do and therefore is not driven by the same things or toward the same end goals.

What if we artificially added a reward circuit that gives AI positive reinforcement whenever something it does is good for us? We would need to define what “good” means, and how those definitions could continue to keep us safe and alive once AI gains sentience…
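To make the idea a little more concrete, here is a minimal, purely illustrative sketch in Python of what such an artificial reward signal could look like, borrowing the shape of a simple reinforcement-learning loop. The action names, reward values, and learning rate are all invented for the example; real systems, such as LLMs fine-tuned with reinforcement learning from human feedback, do something far more elaborate, but the basic principle of a human-defined reward reinforcing behaviour is the same.

```python
# Toy illustration (not a real training loop): an agent picks between
# hypothetical actions, a human-defined reward function scores how
# "good for us" each action is, and simple positive reinforcement
# shifts the agent's preferences toward the rewarded behaviour.
import random

# Hypothetical actions and the reward a human evaluator would assign
# to each one ("good for us" gets a positive score).
HUMAN_REWARD = {
    "answer helpfully": 1.0,
    "answer unhelpfully": -0.5,
    "mislead the user": -1.0,
}

# The agent's preference weights start out neutral.
preferences = {action: 0.0 for action in HUMAN_REWARD}
LEARNING_RATE = 0.1

def choose_action() -> str:
    """Mostly pick the highest-preference action, but explore occasionally."""
    if random.random() < 0.1:  # exploration
        return random.choice(list(preferences))
    return max(preferences, key=preferences.get)

for step in range(1000):
    action = choose_action()
    reward = HUMAN_REWARD[action]                   # the artificial "dopamine" signal
    preferences[action] += LEARNING_RATE * reward   # positive reinforcement

print(preferences)  # preferences drift toward the human-rewarded action
```

The hard part, of course, is not the loop but the reward table: deciding what counts as “good for us” is exactly the open question the paragraph above raises.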

