Complete library

The complete collection of our insights: articles spanning foundational concepts to advanced strategies, offering guidance and inspiration for every stage of the algorithmic business journey.

Frida Holzhausen

The cost of data: A critical hurdle for Copilot implementation

Organizations are increasingly turning to AI tools like Microsoft Copilot to enhance productivity and streamline workflows. Designed to work seamlessly within the Microsoft 365 ecosystem, Copilot enables smarter collaboration, faster data access, and automation of routine tasks. While the potential benefits are substantial, successful implementation requires careful planning to navigate challenges such as data preparation and governance.

Read More
Building algorithmic solutions Frida Holzhausen

CTO update: The DSPy framework to automate and control LLM behavior

In this update, Jonathan Anderson (our CTO) explains the new DSPy framework, designed to simplify and strengthen control over large language models (LLMs). LLMs, while transformative, can be unpredictable, often behaving like “black boxes.” DSPy addresses this by offering a structured, programmatic approach to interaction, reducing the need for manual prompt tuning and making model behavior more consistent and predictable.
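To give a feel for that structured approach, here is a minimal sketch of a DSPy program (assuming DSPy 2.x; the model name and the task are illustrative placeholders, not examples taken from the update):

```python
import dspy

# Illustrative model configuration; swap in whichever LM backend you use.
lm = dspy.LM("openai/gpt-4o-mini")  # placeholder model name (assumption)
dspy.configure(lm=lm)

# A signature declares the task as typed inputs and outputs,
# instead of a hand-tuned prompt string.
class AnswerWithSource(dspy.Signature):
    """Answer a question concisely and name the source you relied on."""
    question = dspy.InputField()
    answer = dspy.OutputField(desc="a short, factual answer")
    source = dspy.OutputField(desc="where the answer comes from")

# A module turns the signature into a concrete prompting strategy;
# ChainOfThought adds intermediate reasoning before the output fields.
qa = dspy.ChainOfThought(AnswerWithSource)

result = qa(question="What does DSPy replace hand-written prompts with?")
print(result.answer, "|", result.source)
```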

Read More
Building algorithmic solutions Frida Holzhausen

Why information retrieval systems are foundational for trustworthy and factual application of generative AI

More and more companies rely on the analytical and generative capabilities of LLMs and other generative models in their day-to-day activities. At the same time, there are growing concerns that the factual errors and underlying biases these models produce may have negative consequences.
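As a rough illustration of why retrieval matters, the toy sketch below grounds a model's prompt in retrieved passages rather than relying on parametric memory alone (the corpus, scoring function, and prompt template are simplified stand-ins, not the approach described in the article):

```python
from collections import Counter

# Toy corpus standing in for an indexed document store (assumption).
DOCUMENTS = {
    "doc-1": "The 2023 annual report states revenue grew 12 percent year over year.",
    "doc-2": "The sustainability policy commits to SBTi-aligned emission targets by 2030.",
    "doc-3": "Employee headcount reached 450 at the end of the fiscal year.",
}

def score(query: str, text: str) -> int:
    """Crude relevance score: number of shared lowercase terms."""
    q_terms = Counter(query.lower().split())
    d_terms = Counter(text.lower().split())
    return sum((q_terms & d_terms).values())

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the k highest-scoring (doc_id, text) pairs for the query."""
    ranked = sorted(DOCUMENTS.items(), key=lambda item: score(query, item[1]), reverse=True)
    return ranked[:k]

def grounded_prompt(query: str) -> str:
    """Build a prompt that asks the model to answer only from retrieved passages."""
    passages = retrieve(query)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in passages)
    return (
        "Answer the question using only the passages below and cite the passage id.\n"
        f"{context}\n\nQuestion: {query}\nAnswer:"
    )

print(grounded_prompt("What are the company's emission targets?"))
```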

Read More
Building the foundation Jonathan Anderson

Did we accidentally make computers more human?

Traditionally, computers have been deterministic machines: systems that produce the same output given the same input. However, the emergence of Large Language Models (LLMs) challenges this, introducing a new paradigm where computers exhibit behavior that seems almost human-like in its variability and adaptability. In a world where humans still trust computers to be deterministic, and where businesses are rushing to implement generative AI wherever they can, it is more important than ever to be targeted, thoughtful, and well-scoped, and to arm yourself with clear metrics to track impact and success.
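To make the contrast concrete, here is a small illustrative sketch (not from the article) of deterministic greedy decoding versus temperature sampling over a toy next-token distribution:

```python
import math
import random

# Toy next-token scores a model might assign after the prompt "The weather is".
TOKEN_SCORES = {"sunny": 2.0, "rainy": 1.5, "unpredictable": 1.2, "nice": 0.8}

def greedy(scores: dict[str, float]) -> str:
    """Deterministic: the same scores always yield the same token."""
    return max(scores, key=scores.get)

def sample(scores: dict[str, float], temperature: float = 1.0) -> str:
    """Stochastic: softmax sampling can yield a different token on every call."""
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights, k=1)[0]

print([greedy(TOKEN_SCORES) for _ in range(3)])                   # identical every run
print([sample(TOKEN_SCORES, temperature=1.0) for _ in range(3)])  # varies between runs
```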

Read More
Ensuring responsible AI Simon Althoff

Large language models: Power, potential, and the sustainability challenge

Large language models (LLMs) have revolutionized how we interact with machines, enabling tasks such as text generation, translation, and question answering. These capabilities come at a cost, however: LLMs demand large amounts of computational power for both training and inference. The transformer models that LLMs are built on have grown steadily in size since their inception, and the trend looks set to continue given the clear performance benefits of scale. Widespread adoption of LLMs therefore raises concerns about environmental impact, which sits uneasily with most companies' sustainability agendas and their SBTi targets.
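As a back-of-the-envelope illustration of that cost, the sketch below uses the common ~6 × parameters × training-tokens FLOPs rule of thumb; the model size, token count, and GPU throughput are assumptions for illustration, not figures from the article:

```python
# Rough training-cost estimate for a hypothetical LLM.
# The 6 * N * D FLOPs approximation is a widely used rule of thumb;
# all concrete numbers below are illustrative assumptions.

params = 70e9          # assumed model size: 70B parameters
train_tokens = 1.4e12  # assumed training corpus: 1.4T tokens
train_flops = 6 * params * train_tokens

# Assumed sustained throughput per accelerator (utilization included).
flops_per_gpu_second = 300e12  # ~300 TFLOP/s, an optimistic assumption
gpu_hours = train_flops / flops_per_gpu_second / 3600

print(f"Training FLOPs: ~{train_flops:.2e}")
print(f"GPU-hours at the assumed throughput: ~{gpu_hours:,.0f}")

# Per-token inference is roughly 2 * N FLOPs, so serving costs scale with usage.
print(f"Inference FLOPs per generated token: ~{2 * params:.2e}")
```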

Read More