Complete library
The complete collection of our insights: a comprehensive range of articles spanning foundational concepts to advanced strategies, offering guidance and inspiration for every stage of the algorithmic business journey.
The Nobel Prize in Physics 2024: Neural networks inspired by physical systems
The 2024 Nobel Prize in Physics highlights groundbreaking work done by John J. Hopfield and Geoffrey E. Hinton on neural networks, where they developed models like the Hopfield Network and the Boltzmann Machine, inspired by the behavior of physical systems. Their pioneering work in the 1980s laid the foundation for the machine learning revolution that took off around 2010. This award celebrates their contributions to the foundational technologies driving modern machine learning and artificial intelligence. The exponential growth in available data and computing power enabled the development of today’s artificial neural networks, often deep, multi-layered structures trained using deep learning methods. In this article we will dive into their discoveries and explain how these breakthroughs have become central in AI applications.
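To make the idea concrete, here is a minimal sketch of a Hopfield-style associative memory, the model named in the article. This is an illustrative toy, not the laureates' code: one binary pattern is stored via Hebbian outer-product weights, and a corrupted copy is recovered by repeated sign updates that descend the network's energy.

```python
# Toy Hopfield network: store a +/-1 pattern, then recall it from a noisy copy.
# The stored pattern acts as an attractor of the update dynamics.

def train(patterns):
    """Hebbian learning: w[i][j] = average of p[i]*p[j], zero diagonal."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, sweeps=5):
    """Asynchronous sign updates: each unit aligns with its local field."""
    s = list(state)
    for _ in range(sweeps):
        for i in range(len(s)):
            field = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if field >= 0 else -1
    return s

stored = [1, 1, -1, -1, 1, -1, 1, -1]
w = train([stored])
noisy = list(stored)
noisy[0], noisy[3] = -noisy[0], -noisy[3]  # flip two bits
print(recall(w, noisy) == stored)          # True: the pattern is recovered
```

With a single stored pattern and two flipped bits, one sweep of updates is enough to fall back into the stored state, which is the "memory" behavior the prize citation highlights.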
CTO update: The DSPy framework to automate and control LLM behavior
In this update, Jonathan Anderson (our CTO) explains the DSPy framework, designed to simplify and strengthen control over large language models (LLMs). LLMs, while transformative, can be unpredictable, often behaving like “black boxes.” DSPy addresses this by offering a structured approach to interaction, reducing the need for manual prompt tuning and making model behavior more consistent and predictable.
The quantum advantage: How quantum computing will transform machine learning
Machine learning (ML) is currently transforming various fields, such as healthcare, finance, and creative industries. However, as data and problems become more complex, classical computing struggles to scale ML algorithms efficiently. Key challenges include the time and computational resources needed to train models on large datasets, optimize deep learning architectures, and perform tasks like data classification and clustering. These limitations drive interest in exploring quantum computing.
Revolutionizing data analysis with Graph Neural Networks
Graph neural networks (GNNs) offer transformative potential for businesses by uncovering hidden patterns and relationships within complex data. From detecting fraud to optimizing supply chains and accelerating drug discovery, GNNs enable smarter decision-making and drive operational efficiency. Unlike traditional machine learning models that analyze data points in isolation, GNNs excel at identifying connections and patterns within the data. For business leaders, this technology presents an opportunity to unlock new avenues for growth and innovation, maximizing the potential of their data.
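The core mechanism that lets GNNs exploit connections is message passing: each node updates its representation by aggregating information from its neighbors. The sketch below shows one mean-aggregation step on a hypothetical three-node graph; real GNNs add learned weights and nonlinearities on top of exactly this pattern.

```python
# One illustrative message-passing step: each node's new feature is the
# mean of its own feature and its neighbours' features. The graph and
# feature values are made up for demonstration.

def message_pass(features, adjacency):
    new = {}
    for node, feat in features.items():
        neighbours = adjacency.get(node, [])
        values = [feat] + [features[n] for n in neighbours]
        new[node] = sum(values) / len(values)  # mean aggregation
    return new

features = {"a": 1.0, "b": 3.0, "c": 5.0}
adjacency = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(message_pass(features, adjacency))  # {'a': 2.0, 'b': 3.0, 'c': 4.0}
```

Stacking several such steps lets information flow along paths in the graph, which is how a GNN can surface, say, a ring of related accounts in fraud detection rather than scoring each account in isolation.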
AI as a tool to offset electrical power scarcity
Sweden's major population centers, including Gothenburg, Stockholm, and Malmö, face a looming threat of power shortages due to capacity constraints in the national grid. Property owners, the transportation sector, and heavy industry will face challenges in running their businesses. AI is part of the toolbox for solving this, but getting started is key.
Building the algorithmic business: Machine learning and optimization in decision support systems
The ability to leverage the combined strengths of machine learning and optimization to enhance decision-making processes can significantly transform business operations. By integrating these technologies, businesses can achieve increased efficiency, reduce operational costs, and improve overall outcomes. This transformative potential is realized through practical applications in decision-making, whether by supporting human decisions or performing them autonomously.
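A minimal "predict, then optimize" sketch illustrates how the two technologies combine in a decision support system. All names and numbers below are hypothetical: a stand-in forecasting step estimates demand per store, and a simple optimization step allocates limited stock against those forecasts.

```python
# Predict-then-optimise decision support in miniature (hypothetical data).

def forecast_demand(history):
    # Stand-in for an ML model: forecast = mean of recent sales per store.
    return {store: sum(sales) / len(sales) for store, sales in history.items()}

def allocate(stock, demand):
    # Simple optimisation step: greedily serve highest-demand stores first,
    # never exceeding either the forecast demand or the remaining stock.
    allocation = {store: 0.0 for store in demand}
    for store in sorted(demand, key=demand.get, reverse=True):
        allocation[store] = min(demand[store], stock)
        stock -= allocation[store]
    return allocation

history = {"north": [80, 90, 100], "south": [40, 50, 60]}
demand = forecast_demand(history)   # {'north': 90.0, 'south': 50.0}
print(allocate(120.0, demand))      # north gets 90, south the remaining 30
```

In production the forecasting step would be a trained model and the allocation step a proper solver with real constraints, but the division of labor is the same: machine learning estimates the uncertain inputs, optimization turns them into a decision.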
Using AI to analyze brain research data
Mats Andersson, a PhD student at Sahlgrenska Academy's neuroscience department, is researching how synapses in the brain work. This research is important for understanding conditions where synaptic turnover is affected, such as autism, schizophrenia, and depression, as well as neurodegenerative diseases like Alzheimer's and Parkinson's. Using cutting-edge tools and collaborating with other scientists, this research aims to make a real difference in understanding and eventually treating or managing these conditions.
CTO Update: Training LLMs on ROCm platform
At Algorithma, we're constantly pushing the boundaries of Large Language Models (LLMs). In this CTO update, Jonathan explores the exciting potential of AMD's ROCm software platform and the next-gen MI300x accelerators for powering these models.
Why naive models are still relevant in the age of complex AI
AI is often equated with black-box complexity, but what if the answer to your problem lies not in sophisticated algorithms but in simpler approaches? At Algorithma, we champion the power of naive models. Often overlooked because of their simplicity, they offer a surprising set of advantages that can be incredibly valuable for businesses of all sizes.
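As one illustration of how cheap a naive model can be, here is a persistence forecast: tomorrow's value is simply today's. The toy series is made up, but such baselines are often hard to beat and take minutes to implement and evaluate.

```python
# Persistence ("naive") forecast on a hypothetical daily series:
# the prediction for each step is the previous observed value.

def naive_forecast(series):
    return series[:-1]  # prediction for time t is the value at t-1

def mean_abs_error(predictions, actuals):
    return sum(abs(p - a) for p, a in zip(predictions, actuals)) / len(predictions)

series = [100, 102, 101, 105, 107, 106]
predictions = naive_forecast(series)
actuals = series[1:]
print(mean_abs_error(predictions, actuals))  # 2.0
```

Any sophisticated model proposed for this series now has a concrete bar to clear: beat a mean absolute error of 2.0, or the added complexity is not paying for itself.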
Large language models: Power, potential, and the sustainability challenge
Large language models (LLMs) have revolutionized how we interact with machines, enabling tasks such as text generation, translation, and question answering. However, these capabilities come at a cost, as LLMs require large amounts of computational power for both training and inference. Transformer models, which LLMs are built on, have grown steadily in size since their inception, and the trend seems set to continue given the clear performance benefits. Widespread adoption of LLMs thus raises concerns about environmental impact, at odds with most companies’ sustainability agendas and their SBTi targets.
Federated machine learning and hybrid infrastructure as levers to accelerate artificial intelligence
The exponential growth of AI applications opens doors to countless opportunities, but it also presents a critical challenge: balancing the power of data-driven insights with the fundamental right to data privacy. Users increasingly prioritize control over their information, while regulations like GDPR and CCPA demand rigorous data protection measures. This complex intersection creates a need for innovative approaches that reconcile user preferences, regulatory compliance, and the need for efficient AI development. Federated machine learning, differential privacy, edge computing, and hybrid infrastructure help us navigate these complexities.
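The privacy-preserving idea behind federated learning can be shown in miniature with federated averaging: each client fits a local model on its private data, and only model parameters, never raw records, are shared and combined. The clients and data below are hypothetical, and the "model" is reduced to a single mean parameter to keep the mechanics visible.

```python
# Federated averaging (FedAvg) in miniature: clients train locally on
# private data and share only parameters, which the server averages
# weighted by client dataset size. All data here is hypothetical.

def local_update(data):
    # Stand-in for local training: the "model" is just a mean parameter.
    return sum(data) / len(data)

def federated_average(client_params, client_sizes):
    total = sum(client_sizes)
    return sum(p * n for p, n in zip(client_params, client_sizes)) / total

clients = {"hospital_a": [1.0, 2.0, 3.0], "hospital_b": [10.0, 14.0]}
params = [local_update(d) for d in clients.values()]  # [2.0, 12.0]
sizes = [len(d) for d in clients.values()]            # [3, 2]
print(federated_average(params, sizes))               # 6.0
```

The server never sees the hospitals' raw values, only the local parameters, which is what lets this pattern coexist with GDPR-style data protection requirements; real deployments layer differential privacy and secure aggregation on top for stronger guarantees.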