Complete library
The complete collection of our insights: a comprehensive range of articles spanning foundational concepts to advanced strategies, offering guidance and inspiration for every stage of the algorithmic business journey.
AI model evaluation: bridging technical metrics and business impact
Evaluating AI models goes beyond simplistic performance metrics; it is a nuanced, strategic process that involves everyone engaged in AI development. Data scientists, stakeholders, project leaders, and subject matter experts all need to understand that a single accuracy score can be misleading, and that quantifiable error measures are only the first step of the model evaluation process. A comprehensive approach is essential to assess real-world implications, identify potential biases, and appreciate the complex interplay between technical capabilities and business impact.
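As a minimal, hypothetical sketch (not taken from the article) of why a single accuracy score can mislead: on an imbalanced dataset, a model that always predicts the majority class looks accurate while detecting nothing of interest.

```python
# Hypothetical illustration: 95 negatives, 5 positives, and a "model"
# that always predicts the majority class (0).
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

# Overall accuracy looks impressive.
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Recall on the minority class reveals the model never finds a positive case.
true_positives = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
recall_positive = true_positives / sum(y_true)

print(f"accuracy: {accuracy:.2f}")                 # 0.95
print(f"minority-class recall: {recall_positive:.2f}")  # 0.00
```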
From concept to impact: 10 steps for AI value-creation
Businesses are realizing that proving AI can work is no longer enough. To succeed, AI initiatives must deliver measurable value and remain adaptable to long-term needs. The shift from proof of concept (PoC) to proof of value (PoV) represents a fundamental change—one that emphasizes outcomes over feasibility and ensures AI solutions address real business challenges.
How a machine learning model is trained
As AI takes a larger role in society and public discourse, and more and more people are exposed to it, it is ever more important for everyone to understand how AI works. Understanding how machine learning models, the most prominent type of model within AI, are trained gives a much better grasp of the capabilities and limitations of AI. This article gives a high-level explanation of how machine learning models are trained and what this means for data science projects.
Defining success: A guide to effective problem formulation in data science
In data science, the formulation of the problem is a critical step that significantly influences the success of any project. Properly defining the problem not only sets the direction for the entire analytical process but also shapes the choice of methodologies, data collection strategies, and ultimately, the interpretation of results. For data scientists, a well-formulated problem helps in homing in on the right questions to ask, allowing them to design experiments and models that are aligned with business objectives. It ensures that the analytical effort is relevant and impactful, leading to actionable insights rather than merely technical achievements.
The Nobel Prize in Physics 2024: Neural networks inspired by physical systems
The 2024 Nobel Prize in Physics highlights groundbreaking work done by John J. Hopfield and Geoffrey E. Hinton on neural networks, where they developed models like the Hopfield Network and the Boltzmann Machine, inspired by the behavior of physical systems. Their pioneering work in the 1980s laid the foundation for the machine learning revolution that took off around 2010. This award celebrates their contributions to the foundational technologies driving modern machine learning and artificial intelligence. The exponential growth in available data and computing power enabled the development of today’s artificial neural networks, often deep, multi-layered structures trained using deep learning methods. In this article we will dive into their discoveries and explain how these breakthroughs have become central in AI applications.
Taking a first look at the OpenAI o1 preview model
In this video Simon, one of our Data Scientists, takes a first look at the o1 preview model. OpenAI says the model is part of a series of reasoning models for solving hard problems, with claims that its performance is on par with PhD students on some benchmarks in fields like physics and chemistry. To test it, Simon gives it a math problem and quickly analyzes its solution. Based on this, he discusses potential strengths and weaknesses of the model and gives an overall impression, with thoughts about the future potential of this type of model.
The quantum advantage: How quantum computing will transform machine learning
Machine learning (ML) is currently transforming various fields, such as healthcare, finance, and creative industries. However, as data and problems become more complex, classical computing struggles to scale ML algorithms efficiently. Key challenges include the time and computational resources needed to train models on large datasets, optimize deep learning architectures, and perform tasks like data classification and clustering. These limitations drive interest in exploring quantum computing.
Did we accidentally make computers more human?
Traditionally, computers have been deterministic machines - systems that produce the same output given the same input. However, the emergence of Large Language Models (LLMs) challenges this, introducing a new paradigm where computers exhibit behavior that seems almost human-like in its variability and adaptability. In a world where humans are still trusting computers to be deterministic, and where businesses are rushing to implement generative AI wherever they can, it is more important than ever to be targeted, thoughtful, well-scoped and armed with clear metrics to track impact and success.
CTO Update: Training LLMs on ROCm platform
At Algorithma, we're constantly pushing the boundaries of Large Language Models (LLMs). In this CTO update, Jonathan explores the exciting potential of AMD's ROCm software platform and the next-gen MI300x accelerators for powering these models.
Creating certainty in uncertainty: Ensuring robust and reliable AI models through uncertainty quantification
Why naive models are still relevant in the age of complex AI
AI is often seen as black-box complexity, but what if the answer to your problem lies not in sophisticated algorithms, but in simpler approaches? At Algorithma, we champion the power of naive models. Often overlooked due to their basic nature, they offer a surprising set of advantages that can be incredibly valuable for businesses of all sizes.
Laying the foundation: Data infrastructure is instrumental for successful AI projects
Data infrastructure is the backbone for enabling successful artificial intelligence projects. It consists of the ecosystem of technologies and processes that govern how businesses and organizations collect, store, manage, and analyze the operational data that fuels their AI initiatives. Without a robust data infrastructure, driving successful AI initiatives becomes almost impossible; your journey will likely grind to a halt after a few implementations.
Unlocking the potential of LiDAR: Leveraging AI to bring 3D vision to life
Imagine a world where security cameras not only show what's happening but also precisely measure distances and object sizes. This futuristic vision is becoming a reality with LiDAR (Light Detection and Ranging) technology. With the ability to measure objects in 3D space, LiDAR holds immense value, especially in security applications where personnel could instantly determine the size and distance of a potential intruder. The area is developing rapidly, but further advances are needed before these 3D environments become lifelike enough to be integrated into consumer products. Could AI perhaps be the solution?
Building an on-premise AI infrastructure: key considerations
We invite you to explore the strategic possibilities of on-premise AI infrastructure in our new white paper. It dives deeper into the advantages, practical considerations, and how to build a future-ready on-premise AI infrastructure solution for your organization.
Federated machine learning and hybrid infrastructure as levers to accelerate artificial intelligence
The exponential growth of AI applications opens doors to countless opportunities, but it also presents a critical challenge: balancing the power of data-driven insights with the fundamental right to data privacy. Users increasingly prioritize control over their information, while regulations like GDPR and CCPA demand rigorous data protection measures. This complex intersection creates a need for innovative approaches that reconcile user preferences, regulatory compliance, and the need for efficient AI development. Federated machine learning, differential privacy, edge computing, and hybrid infrastructure help us navigate these complexities.
Navigating data fragmentation: Challenges and strategies in a world of borders
Data has become a valuable asset that drives innovation, business growth, and global collaboration. However, a recent trend of data localization regulations and strengthened data protection laws is disrupting the seamless flow of data across borders, challenging traditional cloud strategies and creating a new reality of data fragmentation.