Did we accidentally make computers more human?

Written by Jonathan Anderson & Simon Althoff

Generative AI will undoubtedly transform business, introducing capabilities that were once considered the exclusive domain of human intelligence. Traditionally, computers have been deterministic machines - systems that produce the same output given the same input. The emergence of Large Language Models (LLMs) challenges this, introducing a new paradigm in which computers exhibit behavior that seems almost human-like in its variability and adaptability. In a world where humans still trust computers to be deterministic, and where businesses are rushing to implement generative AI wherever they can, it is more important than ever to be targeted, thoughtful, and well-scoped, and to be armed with clear metrics to track impact and success.


Short on time?

Read the condensed summary for a quick overview of how we accidentally made computers more human.


The historical development of generative AI can be traced back to early attempts, in the mid-1960s, at using rudimentary neural networks and rule-based systems for creative content generation. Pioneering examples like the chatbot ELIZA, which mimicked Rogerian psychotherapy through pattern matching and keyword recognition, laid the groundwork for more sophisticated models. While ELIZA is a significant milestone, there were even earlier attempts at generating creative text using simpler techniques.

The exponential growth in computational power, particularly the development of Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs) alongside advancements in Central Processing Units (CPUs), has been instrumental in enabling significant progress in deep learning. This has led to the development of Large Language Models (LLMs), which are trained on massive datasets of text or code. These models can generate human-quality, contextually relevant content across various fields, including natural language processing and code creation, while similar generative models can perform tasks like image generation. Notably, the capabilities of modern LLMs far surpass those of the early generative models.

Key characteristics of generative AI

Generative AI encompasses a range of techniques that enable computers to create entirely new content, data, or creative artifacts. These techniques can involve various approaches, including:

  • Generative Adversarial Networks (GANs): This method pits two neural networks against each other. One network generates content, while the other attempts to distinguish the generated content from real data. This competitive process drives the generation of increasingly realistic outputs, like creating new images that resemble existing styles or generating realistic-looking faces.

  • Variational Autoencoders (VAEs): These models work by learning a compressed representation of the training data (often called latent space). They can then be used to generate entirely new data points that share similar characteristics with the training data. For instance, VAEs could be used to generate new images of furniture that resemble existing styles in a dataset.

  • Autoregressive models: These models excel at predicting the next element in a sequence, like the next word in a sentence or the next note in a musical piece. This allows them to generate coherent and sequential outputs across various formats, including text generation, music composition, and even basic computer code. 
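To make the autoregressive idea concrete, here is a minimal sketch using a hand-built bigram table. The table, its tokens, and its probabilities are all invented for illustration - a toy stand-in for a learned model, not how production LLMs are built - but the generation loop itself mirrors the real mechanism: each token is predicted from what came before.

```python
import random

# Toy "model": each token maps to candidate next tokens with probabilities.
# (Illustrative values only; real LLMs learn these conditionals from data.)
BIGRAMS = {
    "<s>": [("the", 0.6), ("a", 0.4)],
    "the": [("cat", 0.5), ("dog", 0.5)],
    "a":   [("cat", 0.5), ("dog", 0.5)],
    "cat": [("sat", 0.7), ("</s>", 0.3)],
    "dog": [("sat", 0.7), ("</s>", 0.3)],
    "sat": [("</s>", 1.0)],
}

def generate(seed=None, max_len=10):
    """The core autoregressive loop: sample the next token conditioned on
    the previous one, append it, and repeat until the end marker."""
    rng = random.Random(seed)
    token, out = "<s>", []
    for _ in range(max_len):
        candidates, weights = zip(*BIGRAMS[token])
        token = rng.choices(candidates, weights=weights)[0]
        if token == "</s>":
            break
        out.append(token)
    return " ".join(out)

print(generate(seed=0))
```

Note that without a fixed seed, repeated calls can produce different sentences from the same starting point - the same property that gives LLMs their characteristic variability.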

Large Language Models (LLMs) are a powerful type of generative AI that leverage deep learning and are trained on massive amounts of text data. This allows them to learn the statistical relationships between words and phrases, enabling them to generate human-quality text, translate languages, write different kinds of creative content, and answer your questions in an informative way. Here's a closer look at some key characteristics of LLMs:

  • Non-deterministic outputs: Unlike traditional deterministic systems, LLMs can produce slightly different responses for the same prompt due to the use of randomness and probabilities during generation. This "controlled fuzziness" allows for a degree of variability that mimics the nuances of human language.

  • Controlled fuzziness: This variability in outputs allows LLMs to explore different creative possibilities within the boundaries of the data they are trained on. It enables them to generate creative text formats like poems or code, but it's important to remember that this creativity is still rooted in the patterns learned from the training data.

  • Diverse applications: LLMs offer a wide range of applications due to their ability to process and generate text. These applications include tasks like text generation for marketing campaigns, machine translation for communication across languages, and code generation to automate specific programming tasks.
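The "controlled fuzziness" described above typically comes from temperature-scaled sampling: before a next token is drawn, the model's raw scores (logits) are divided by a temperature value, which sharpens or flattens the resulting probability distribution. The sketch below uses made-up logit values to show the effect:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores into probabilities. Low temperature sharpens the
    distribution toward the top choice (near-deterministic); high
    temperature flattens it (more variable, more 'creative')."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate next tokens.
logits = [2.0, 1.0, 0.1]

cold = softmax_with_temperature(logits, 0.1)  # sharply peaked
hot = softmax_with_temperature(logits, 2.0)   # much flatter
print([round(p, 3) for p in cold])
print([round(p, 3) for p in hot])
```

At low temperature nearly all probability mass lands on the top candidate, so repeated generations agree; at high temperature the alternatives become plausible draws, which is where run-to-run variability comes from.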


"Generative AI, particularly Large Language Models, represents a significant leap in our ability to simulate human-like understanding and creativity. However, this power comes with the responsibility to carefully manage its integration into our systems. At Algorithma, we believe that the true value of these technologies lies in augmenting human capabilities, not replacing them. By combining human insight with AI's expansive data processing, we can unlock unprecedented potential in business decision-making and customer interaction. But we must always remain vigilant about the variability and biases inherent in these models, ensuring transparency and maintaining robust oversight."

- Jonathan Anderson, CTO of Algorithma


The challenge of fuzziness: a scenario with LLMs in deterministic processes

Imagine a retail store where managers can interact with complex business intelligence reports through a natural language interface powered by an LLM. This allows them to ask questions about in-store data in plain English, such as: "Did relocating the phone case display affect customer traffic in the electronics department?"

LLMs offer an exciting prospect for BI systems, enabling intuitive interaction with data through natural language. This can improve accessibility and empower store managers to gain insights without needing extensive data analysis expertise. However, the non-deterministic nature of LLM outputs presents a key challenge: the same question about product placement might yield slightly different answers at different times, due to factors like probabilistic token choices and nuances in wording. The resulting inconsistencies can hinder decision-making.
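One common way to reduce (though not fully eliminate) this variability, assuming access to the model's next-token probabilities, is greedy decoding: always select the most likely token rather than sampling. A minimal sketch, with hypothetical probabilities:

```python
def greedy_decode(next_token_probs):
    """Pick the single most likely next token instead of sampling.
    Given the same probabilities, this always returns the same token,
    trading creative variety for reproducible answers."""
    return max(next_token_probs, key=next_token_probs.get)

# Hypothetical probabilities a model might assign to candidate answers
# about the effect of the display relocation.
probs = {"increased": 0.55, "decreased": 0.30, "unchanged": 0.15}

assert greedy_decode(probs) == greedy_decode(probs)  # reproducible
print(greedy_decode(probs))
```

Many hosted LLM APIs expose a similar knob (often a temperature setting, where a value near zero approximates greedy decoding), which is one practical lever for making BI-style answers more consistent.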

So, while the idea of a fully natural language BI system is intriguing, the limitations of LLMs necessitate a more cautious approach:

  • Misinterpretation of results: The "fuzziness" of LLM outputs can be misinterpreted by store managers, leading to flawed decisions. Inconsistent responses might be mistaken for genuine changes in trends.

  • Lack of explainability: Unlike traditional BI reports, LLMs often don't provide explanations for their answers. This makes it difficult for managers to understand the reasoning behind the LLM's response and assess its reliability.

  • Data bias: LLMs inherit biases from the data they are trained on. This can lead to skewed results in the LLM's responses, potentially impacting decisions about product placement, marketing campaigns, or resource allocation.

Inconsistent responses from LLMs can create confusion and hinder decision-making, for example by producing conflicting recommendations about product placement or store layout changes. Such inconsistencies can undermine the stability and reliability that businesses expect from their analytics and reporting systems, necessitating a careful approach to integrating LLMs into business operations. Moreover, since the advent of computers, humans have been taught that computers provide deterministic outputs - we are simply not used to variability and fuzziness in their responses.

This is just one example. 

LLM in business: what it means

While the transformative potential of LLMs is undeniable, it is crucial to recognize their limitations and the need for responsible adoption. Over-reliance on LLM outputs without human oversight can lead to flawed decisions; in a recent McKinsey survey, nearly one quarter of companies reported challenges with inaccuracies in generative AI. Additionally, the risk of biased data influencing AI outputs, and the ethical considerations of deploying AI in sensitive areas, must be addressed. Businesses should approach LLM adoption with a balanced perspective, acknowledging both the opportunities and the potential pitfalls.

Change management is a critical aspect of successful LLM implementation. As computers leverage LLM technology, human expectations of their capabilities and limitations need to be recalibrated. 

  • Shifting from deterministic to probabilistic outputs: Traditionally, computers have been viewed as deterministic machines, providing consistent outputs for the same input. LLMs challenge this notion by introducing probabilistic and variable outputs. Change management efforts should educate users about this shift and emphasize the importance of interpreting LLM outputs within this context.

  • Focus on insights, not definitive answers: LLMs excel at identifying patterns and generating creative text formats, but their outputs should be viewed as insights, not definitive answers. Humans need to understand the limitations of LLM reasoning and the importance of critical thinking when evaluating LLM-generated content.

  • Transparency and explainability: While achieving full explainability for complex AI models like LLMs is an ongoing challenge, businesses should strive for transparency in how LLMs are used. This includes informing users about the potential for bias in LLM outputs and the role of human oversight in the decision-making process.

The successful implementation of LLMs hinges on selecting use cases where the strengths of these models align with business needs. For example, tasks that benefit from pattern recognition in customer behavior data or require creative content generation for marketing campaigns are well-suited for LLMs. Tailored implementation projects, starting with pilot phases and phased rollouts, allow businesses to fine-tune AI deployment, ensuring that the technology meets specific operational requirements and mitigates potential risks. It is also important to use high-quality data when training or grounding LLMs, as "clean data in, clean results out" applies here as well.

Overall, LLMs hold promise for transforming human-computer interaction. However, a cautious approach that acknowledges their limitations, incorporates change management strategies, and prioritizes responsible AI development is essential. By implementing strategies that combine human and LLM capabilities, businesses can unlock valuable data insights, make informed decisions, and navigate the evolving landscape of human-computer interaction.


"Large Language Models are revolutionizing the way we interact with data by providing intuitive, human-like responses. At Algorithma, we harness this technology to enhance our analytical capabilities and drive innovation. However, it's crucial to remember that the probabilistic nature of AI requires a balanced approach. We must blend the insights and generative capabilities we gain from AI with human judgment to ensure accuracy and relevance. The future of AI in business lies in this synergy, where technology empowers us to make better decisions while we guide its application with careful and diligent oversight."

- Simon Althoff, Data Scientist at Algorithma


Key advice and considerations for generative AI in business

While LLMs offer immense potential, responsible and successful adoption requires careful consideration of several key factors:

  • Transparency: Businesses must be transparent about their use of LLMs, including the inherent "fuzziness" in their outputs. Clear communication with stakeholders, including employees and customers, fosters trust and understanding. Everyone involved should be aware of the capabilities and limitations of this technology.

  • Human oversight and "Human-in-the-Loop": Maintaining human oversight throughout the decision-making process is essential. LLM outputs should be viewed as valuable insights that complement, rather than replace, expert judgment. Human validation and refinement of AI-generated content are crucial to ensuring accuracy, relevance, and mitigating potential biases that might be present in the LLM's training data.

  • Continuous learning: The field of generative AI is rapidly evolving. Businesses that embrace a culture of continuous learning will be better positioned to adapt and iterate as new advancements emerge and best practices develop. Staying updated on the latest developments will ensure organizations are at the forefront of AI technology and can fully leverage its capabilities.

  • Selecting the right use cases: Careful selection of use cases is vital to maximize the benefits of LLMs. These models excel in tasks that involve:

    • Pattern recognition: LLMs can analyze large datasets to identify trends and patterns, such as shifts in customer behavior. This can be helpful for tasks like market research, optimizing product placement in stores, or gaining insights for marketing campaigns.

    • Creative text generation: LLMs can generate different creative text formats, like marketing copy, product descriptions, or even scripts. This can free up human creativity for other tasks and allow for faster content creation.

    • Text summarization: LLMs can provide concise summaries of complex documents or reports. This can save time for busy professionals and improve information accessibility.

By focusing on these areas where LLMs shine and implementing strategies that combine human and LLM capabilities, businesses can unlock valuable data insights, make informed decisions, and navigate the evolving landscape of human-computer interaction. 

Did we accidentally make computers more human? Not quite.

Generative AI and LLMs are impressive. LLMs understand and generate human language, creating a more natural interaction than ever before. In that sense, they seem more human-like. However, LLMs lack the consistency and definitive reasoning of humans. Their outputs are "fuzzy," meaning they can vary slightly for the same prompt.

This "fuzziness" isn't a mistake, but rather a reflection of complexity. Instead of accidentally making computers more human, we've created powerful tools that complement human strengths. By working together with LLMs, we can leverage their insights and creative capabilities, while human oversight ensures accuracy and avoids biases. So, LLMs are a new chapter in human-computer interaction, not a replacement for human intelligence. They offer a powerful partnership that unlocks a future of better decision-making and a more natural way to interact with data.

Just be smart about where and how to use them.
