Exploring the parallels and differences between psychology and machine learning
Disclaimers:
- This blog post was co-authored with my friend, Dee Penco, a certified therapist and counsellor.
- For the past 2.5+ years, Dee and I have spent hours and hours discussing and reasoning about human behaviour, which has sparked my passion for psychology. Over time, we cross-shared ideas about psychology and ML/AI modelling concepts.
- While I found parallels between these fields, I asked Dee if she could explain how humans function and make decisions. Being intrigued by the rise of artificial general intelligence (AGI), my question was: Do humans follow the same steps in generating outputs as machine learning does, and how feasible is it to mimic human-like decision-making?
- To my great joy, Dee decided to write this post with me, to share a dual perspective on how humans vs. ML models create outputs.
- In the text below, we sometimes interchange AI and ML as terms, but we understand they are not the same, and that ML is a subset of AI.
The year 2024 was big in recognizing machine learning and artificial intelligence contributions.
The Nobel Prize in Chemistry was awarded for advancements in protein science: David Baker for creating new kinds of proteins, alongside Demis Hassabis and John Jumper for developing an AI model that solved a 50-year-old challenge of predicting proteins’ complex structures.
Furthermore, John Hopfield and Geoffrey Hinton were awarded the Nobel Prize in Physics for their work on artificial neural networks, brainlike models capable of recognizing patterns and producing outcomes that resemble human decision-making processes.
Although artificial intelligence models human problem-solving and decision-making ever more accurately, the mechanisms behind human cognition are still not fully understood.
The psychology of human (re-)action involves complex interconnected dimensions, shaped by layers of conscious and subconscious factors.
— So, what sets human and ML/AI models apart in generating outputs?
To address this question, let’s explore these two worlds — psychology and machine learning — and uncover the connections that shape how humans and human-created AI models produce outputs.
The aims of this post are:
- Bring high-level professional psychology explanations closer to technical readers on what affects human decision-making.
- Showcase the high-level machine learning (ML) modelling process and explain how ML models generate outputs to non-technical professionals.
- Identify the differences and similarities between the two processes—the human and the machine—that produce outputs.
Psychology aspect: How do humans generate outputs? | By Dee
Before I start writing this section, I want to emphasize that every psychologist worldwide would be thrilled if there were a way for people to function more simply, or at least as simply as AI or ML models do.
Experts in AI and ML are probably appalled by what I just said because I implied that “AI and ML are simple.”
— But that wasn’t my intention.
I only mean to emphasize how much simpler these models are compared to the complexities of humans.
When Marina was explaining, at a high level, how machine learning modelling works — I couldn’t help but think:
If we could “reduce” humans to this “straightforward” methodology, we would cure most psychological problems, transform lives for the better, and dramatically improve overall population wellbeing.
Imagine if a person could receive input(s), pass them to some internal algorithms that could determine the weight, importance, and quality of that input, make the most likely prediction, and, based on that, produce a controlled output — thought, emotion, or behaviour.
But unlike ML or AI, the human mind processes information in far more complex ways, influenced by numerous interconnected factors.
From this point, I’ll stop speculating about what happens within an AI or ML model and explain the “human” modelling flow.
To illustrate the concept, I will discuss several factors influencing humans’ decision-making.
For this, I invite you to imagine a person as a “black box pre-trained model”. In other words, a model that already comes pre-loaded with knowledge: patterns and weights learned in the training step.
These patterns and weights, or factors, vary from person to person and are known as:
- (1) Intelligence and IQ
- (2) Emotional world and EQ
- (3) Conscious world — what the “model” has learned so far: values, experiences, purpose
- (4) Unconscious and subconscious world — what the “model” learned and repressed so far — short-term/long-term memory + (again) values, experiences, purpose
- (5) Genetic predispositions — what we’re born with
- (6) Environment — social, cultural, physical
- (7) Physiological needs — Maslow’s Hierarchy (Hierarchy of Needs)
- (8) Hormonal + physiological status — Neurobiology, Endocrine System, Arousal
- (9) Decision-making centres — Id, Ego, Superego, the separate entities within us
- (10) Intuition and creativity — which can be considered part of the above-grouped factors or separate entities of their own (Intuition, Divergent Thinking, Flow State)
So far, we have identified 10 factors that vary for each person.
I want to emphasize that they are all interconnected and sometimes so “fused” together that they can even be compounded.
In addition, each one can be thicker or thinner and may contain “particles”, or information that is either predominant or deficient.
- For example, the hormonal factors may have a predominant hormone (such as serotonin, which affects mood; cortisol in response to stress; dopamine, which is essential for excitement, etc.). The intellectual factor can be higher or lower.
Now imagine that there is some algorithm inside the person constantly rearranging the order of factor importance, so one may sometimes end up in the front, sometimes in the middle, and sometimes in the back.
- Let’s take physiological needs — hunger, for example. If you place it right up front, it will dictate which information reaches the second, third, fourth, etc. positions, and the output will depend on that.
In other words, if you make decisions while hungry, the output will probably not be the same as when your stomach is full.
📌 The factors are ordered by importance, with the most important at the specific moment always in the first position, then the second, and so on.
What I just described briefly above is how humans operate when they receive input information under specific circumstances.
🙋🏽♀️ Returning to the intro thought again, did I explain why psychologists everywhere long for a more straightforward human experience? Why would we, on this side, actually be happy if people were only as “complicated” as ML models?
- Consider only how many psychological problems could be resolved — given that psychological factors are fundamental to everything — and how many other issues around us would be resolved if only we could “tune” or “reset” our factors as easily as in the ML modelling.
— But, what can you do to control your outputs better?
Remember the part I mentioned above on the “thickness” of your factors? Well, since the thickness of your factors and their order determine your output (the emotion you feel, the thought you form, and the reaction to information), it’s useful to know that you can absolutely thicken some of these factors to your advantage and hold them firmly in the first position.
I’ll simplify this again with several examples:
You can probably (and hopefully) ensure you’re never hungry. You can work on regulating your hormones. In therapy, you can address your fears and work to eliminate them. You can adjust deeply rooted beliefs stored in your subconscious, and so on. 👉🏼 We (you) can indeed do this, and it’s a pity more people don’t work on adjusting their factors.
And, for now, let’s leave it at that.
Until some form of AI figures out a more efficient way to do this for us, or instead of us, we’ll continue to make certain decisions, feel certain emotions, and take specific actions as we do now.
Machine Learning aspect: How do models generate outputs? | By Marina
When Dee talked about the “human black box” with pre-trained patterns, I couldn’t help but think about how closely that parallels the machine learning process. Just as humans have multiple interconnected factors influencing their decisions, ML models have their version of this complexity.
So, what is Machine Learning?
It is a subset of AI that allows machines to learn from past data (or historical data) and then make predictions or decisions on new data records without being explicitly programmed for every possible scenario.
With this said, some of the more common ML “scenarios” are:
- Forecasting or Regression (e.g., predicting house prices)
- Classification (e.g., labelling images of cats and dogs)
- Clustering (e.g., finding groups of customers by analyzing their shopping habits)
- Anomaly Detection (e.g., finding outliers in your transactions for fraud analysis)
Or, to exemplify these scenarios with our human cognitive daily tasks, we also predict (e.g., will it rain today?), classify (e.g., is that a friend or stranger?), and detect anomalies (e.g., the cheese that went bad in our fridge). The difference lies in how we process these tasks and which inputs or data we have (e.g., the presence of clouds vs. a bright, clear sky).
So, data (and its quality) is always at the core of producing quality model outcomes from the above scenarios.
Data: The Core “Input”
Similar to humans, who gather multimodal sensory inputs from various sources (e.g., videos from YouTube, music from the radio, blog posts from Medium, financial records from Excel sheets, etc.), ML models rely on data that can be:
- Structured (like rows in a spreadsheet)
- Semi-structured (JSON, XML files)
- Unstructured (images, PDF documents, free-form text, audio, etc.)
Because data fuels every insight an ML model produces, we (data professionals) spend a substantial amount of time preparing it — often cited as 50–70% of the overall ML project effort.
This preparation phase gives ML models a taste of the “filtering and pre-processing” that humans do naturally.
We look for outliers, handle missing values and duplicates, remove unnecessary inputs (features), or create new ones.
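As a minimal sketch of these preparation steps in pandas (the dataset, column names, and the outlier cutoff below are all hypothetical, chosen only for illustration):

```python
import pandas as pd

# Hypothetical raw dataset of house listings
df = pd.DataFrame({
    "price":  [250_000, 260_000, None, 5_000_000, 260_000],
    "sqft":   [1200, 1400, 1300, 1250, 1400],
    "garage": [1, 2, 1, 1, 2],
})

# Handle missing values and duplicates
df = df.dropna(subset=["price"]).drop_duplicates()

# Remove an extreme outlier with a simple (illustrative) cutoff
df = df[df["price"] < 1_000_000]

# Drop a feature we consider unnecessary, and create a new one
df = df.drop(columns=["garage"])
df["price_per_sqft"] = df["price"] / df["sqft"]

print(df)
```

In a real project, each of these steps would be driven by domain knowledge rather than a hard-coded threshold.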
Beyond the above-listed tasks, we can additionally “tune” the data inputs. Remember how Dee mentioned factors being “thicker” or “thinner”? In ML, we achieve something similar through feature engineering and weight assignments, though in a purely mathematical way.
In summary, we are “organizing” the data inputs so the model can “learn” from clean, high-quality data, yielding more reliable model outputs.
Modelling: Training and Testing
While humans can learn and adapt their “factor weights” through deliberate practices, as Dee described, ML models have a similarly structured learning process.
Once our data is in good shape, we feed it into ML algorithms (like neural networks, decision trees, or ensemble methods).
In a typical supervised learning setup, the algorithm sees examples labelled with the correct answers (like a thousand images labelled “cat” or “dog”).
It then adjusts its internal weights — its version of “importance factors” — to match (predict) those labels as accurately as possible. In other words, the trained model might assign a probability score indicating how likely each new image is to be a “cat” or a “dog”, based on the learned patterns.
This is where ML is more “straightforward” than the human mind: the model’s outputs come from a defined process of summing up weighted inputs, while humans shuffle around multiple factors — like hormones, subconscious biases, or immediate physical needs — making our internal process far less transparent.
So, the two core phases in model building are:
- Training: The model is shown the labelled data. It “learns” patterns linking inputs (image features, for example) to outputs (the correct pet label).
- Testing: We evaluate the model on new, unseen data (new images of cats and dogs) to gauge how well it generalizes. If it consistently mislabels certain images, we might tweak parameters or gather more training examples to improve the accuracy of generated outputs.
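The two phases above can be sketched with scikit-learn; as an assumption for brevity, synthetic numeric features stand in for real image data:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic stand-in for "cat vs. dog" image features
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)

# Training: the model adjusts its internal weights on labelled examples
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
model = LogisticRegression().fit(X_train, y_train)

# Testing: evaluate generalization on new, unseen data
acc = accuracy_score(y_test, model.predict(X_test))

# The model outputs a probability score per class for each new example
probs = model.predict_proba(X_test[:1])
print(f"accuracy={acc:.2f}, class probabilities={probs[0]}")
```

If the test accuracy is poor, that is the signal to tweak parameters or gather more training examples, exactly as described above.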
As it all comes back to the data, it’s relevant to mention that there can be more to the modelling part, especially if we have “imbalanced data.”
For example: if the training set has 5,000 dog images but only 1,000 cat images, the model might lean toward predicting dogs more often — unless we apply special techniques to address the “imbalance”. But this is a story that would call for a fully new post.
The idea behind this mention is that the number of examples in the input dataset for each possible outcome (the image “cat” or “dog”) influences the complexity of the model’s training process and its output accuracy.
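One common mitigation, sketched here with scikit-learn's class-weighting utilities (the 5,000/1,000 split mirrors the hypothetical example above), is to weight each class inversely to its frequency during training:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_class_weight

# Hypothetical imbalanced labels: 5,000 "dog" (0) vs. 1,000 "cat" (1)
y = np.array([0] * 5000 + [1] * 1000)

# "balanced" computes n_samples / (n_classes * count_per_class),
# so each minority-class example counts more during training
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y)
print(weights)  # dog examples weigh less, cat examples weigh more

# The same idea plugs directly into many estimators:
model = LogisticRegression(class_weight="balanced")
```

Resampling (over- or under-sampling) is the other common family of techniques, but, as noted, that is a story for a fully new post.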
Ongoing Adjustments and the Human Factor
However, despite its seeming straightforwardness, an ML pipeline isn’t “fire-and-forget”.
When the model’s predictions start drifting off track (maybe because new data has changed the scenario), we retrain and fine-tune the system.
Again, the data professionals behind the scenes need to decide how to clean or enrich data and re-tune the model parameters to improve model performance metrics.
That’s the “re-learning” in machine learning.
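A minimal sketch of such a feedback loop might look like the following; the accuracy threshold and the retrain trigger are illustrative assumptions, not a standard recipe:

```python
import numpy as np

def maybe_retrain(model, X_recent, y_recent, fit_fn, threshold=0.85):
    """Refit the model when accuracy on recent labelled data drifts too low."""
    accuracy = float(np.mean(model.predict(X_recent) == y_recent))
    if accuracy < threshold:
        # Predictions drifted off track: fit a fresh model on the newer data
        return fit_fn(X_recent, y_recent), accuracy
    return model, accuracy
```

In practice, this decision also involves the human judgment calls described above: how to clean or enrich the new data and which parameters to re-tune.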
This is important because bias and errors in data or models can ripple through to flawed outputs and have real-life consequences. For instance, a credit-scoring model trained on biased historical data might systematically lower scores for certain demographic groups, leading to unfair denial of loans or financial opportunities.
In essence, humans still drive the feedback loop of the improvement in training machines, shaping how the ML/AI model evolves and “behaves”.
Concluding remarks
In this post, we’ve explored how humans generate outputs, influenced by at least ten major interwoven factors, and how ML models produce results through data-driven algorithms.
Although machines and humans don’t derive outcomes in the same way, the core ideas are remarkably similar.
- For humans, the process is intuitive: we receive and collect sensory inputs and store them in various forms of memory. Our cognitive processes then combine logic, emotions, hormones, past experiences, in-the-moment external inputs, etc., to produce actions or behaviours. As Dee illustrated, this unique combination of factors creates our human ability to be both predictable and surprising, rational and emotional, all at once.
- For ML or AI, the process flow is data-based. We provide structured, semi-structured, or unstructured data, store, clean, label, or enrich them, model them with ML/AI algorithms, and then let the developed model generate predictions, decisions, or recommendations. Yet, as shown above, even this seemingly straightforward pipeline requires ongoing human oversight and adaptation to new scenarios.
The key difference lies in the unpredictability of human mental processes, versus the more trackable, parameterized nature of ML/AI models.
And while most of us wonder what the future brings — what will happen when AI reaches AGI and possibly replaces our jobs with its “superhuman” abilities — Dee shared an interesting perspective on that point:
What if AI starts following our path? — Developing emotions, cluttering its memory with conflicting or harmful data, somehow creating an Id, and becoming unpredictable.
Hmm… You weren’t expecting that, were you? — The outcome doesn’t necessarily have to be that AI will surpass us. — Maybe we’ll surpass AI, which opens up another question: do human traits (factors) necessarily evolve…? Because maybe that’s the future of AI.
Until then, it’s important to recognize both human and machine models can learn, adapt, and change their outputs over time, given the right inputs and a willingness to keep refining the process.
We can work on improving the quality of outputs by selecting better inputs and refining how they’re processed — whether in guided sessions with counsellors or ML/AI data and algorithm designs.
✨ Thank you for diving into our post! ✨
For a fresh psychology perspective, subscribe to Dee’s posts👇🏼
For data trends and tips, don’t miss Marina’s posts👇🏼
Human Minds vs. Machine Learning Models was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story.