Inma Martinez – Artificial Intelligence 2025

08. January 2025 – Oliver Stoldt

Inma Martinez – Artificial Intelligence 2025: What Is Real, What Is Hype, and How Can Businesses Create Competitive Assets with Advanced AI (Deep Learning, GenAI, Large and Small Foundation Models)

In 2025, artificial intelligence is no longer just a world of mathematical algorithms creating machine learning rules within computerised systems, but a rather extraordinary landscape of advanced cognition systems that have hit the market within 18 months of each other to become what governments and regulators today consider Advanced AI systems. Although these AI systems had been “in the making” for decades, producing steady progress, the explosive transformation we see today is shaking society and business to the core. Not just because we have realised that these AI systems perform at unimaginable levels of efficiency, but because they are forcing us to look at human cognition and skills and compare them to this groundbreaking performance, especially when AI agents not only learn through trial and error but are mastering self-learning, thus paving the way for General Purpose AI, the big AI black box capable of outsmarting us. Perhaps what has captured humanity’s imagination, as well as ignited our shared and growing concerns about the indomitable power of AI, has been witnessing the evolution of Generative AI since 2022.

In a marketplace flooded with advanced AI as of January 2025, we find ourselves at the crossroads of asking not only what is real and what is hype, but primarily how small and medium enterprises, not just the mighty and ultra-capitalised multinationals, can truly benefit from the promise of AI as a competitive asset. We have seen clearly how AI can help businesses achieve unprecedented levels of operational efficiency, but share prices will rise or plummet depending on whether AI-driven product innovation allows companies to increase enterprise value. Timing will be everything in the next three years in order to position sectors for the next ground-breaking jump into AI hyperspace around 2030.

There are two tectonic forces pressuring businesses to adopt advanced AI: bringing Generative AI onboard and putting it to the test of being fit for purpose (or not yet), and investing in applications derived from Foundation Models.

Here are three simple recommendations if you are keen on bringing GenAI into your company in 2025:

  1. Generative AI is not intelligent, but its AI agents can pretend to know what they are doing

Just when some companies were feeling comfortable and encouraged enough to give Co-Pilots a shot, Generative AI agents emerge as the next level of autonomy and purpose-led automation in Generative systems. AI agents are designed to achieve specific goals. They can understand, plan, and execute actions to reach those objectives with a high degree of independence, making decisions and taking actions without constant human intervention. Because their DNA is “generative”, that is, to produce an output or achieve a goal “at all costs”, their weak point (and our challenge) is how to restrict their desire to be overly interactive – not with users, which would be a great thing, but with external data sources that they could procure or access without anyone knowing – and their tendency to make decisions without consultation.

Your challenge as a business owner testing AI agents should be how to create internal hierarchies of “restrictions”: how “free” you will allow these agents to be, and how restrained you want them to operate until your safety protocols are aligned.
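One way such a hierarchy of “restrictions” might look in practice is a tiered, deny-by-default permission check that every agent action must pass. The tier names and action list below are invented for illustration and are not tied to any specific agent framework:

```python
from enum import Enum

# Hypothetical permission tiers for an AI agent; the names and the
# action-to-tier mapping are illustrative assumptions.
class Tier(Enum):
    READ_INTERNAL = 1    # may read approved internal data only
    CALL_EXTERNAL = 2    # may query external data sources
    ACT_AUTONOMOUS = 3   # may act without human sign-off

def is_allowed(action: str, granted: set) -> bool:
    """Check a requested agent action against the tiers it was granted."""
    required = {
        "read_crm": Tier.READ_INTERNAL,
        "fetch_web": Tier.CALL_EXTERNAL,
        "send_email": Tier.ACT_AUTONOMOUS,
    }.get(action)
    # Unknown actions are denied: a deny-by-default policy.
    return required is not None and required in granted

# A cautious rollout: internal reads only, nothing external or autonomous.
granted = {Tier.READ_INTERNAL}
print(is_allowed("read_crm", granted))   # True
print(is_allowed("fetch_web", granted))  # False
```

The deny-by-default choice matters: an agent asking for an action nobody anticipated is refused rather than waved through, which mirrors the article’s advice to keep agents restrained until safety protocols are aligned.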

  2. Hyper-personalised customer experiences: AI-powered chatbots still do not understand the nuances of why people call customer services

Generative AI is extraordinary at pretending to have friendly human conversations with customers, but its perception capabilities diminish when people need resolution of human emotions, such as frustration with a product, or anxiety upon realising that they are victims of cybercrime and their accounts have been hacked. Typically, the most common calls to customer service concern things people cannot resolve on a website, specific situations that do not follow a decision tree of options and, increasingly, difficulty expressing what they need help with. The resulting prompts – the human interactions and questions to the bot – tend to be difficult for an AI system to interpret, since it is fundamentally a language-based system and has not been trained in human psychology.

If your company is willing to test AI-powered chatbots, invest in a team of psychologists who will work alongside the GenAI trainers, so that poorly formed human prompts can be mapped into a taxonomy proprietary to your company, its culture, and the kind of empathetic treatment you wish to convey to your customers. It is doable, but it requires the right team and the right human-factor skills in the trainers.
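A minimal sketch of what such a proprietary taxonomy could look like: raw prompts are matched against intent categories, and emotionally charged or unclassifiable calls are escalated to a human. The categories, keywords, and escalation rules here are invented examples; a real taxonomy would be built by the psychologists and GenAI trainers the article describes:

```python
# Illustrative taxonomy mapping poorly formed customer prompts to intents.
# All category names and keywords are hypothetical placeholders.
TAXONOMY = {
    "account_compromise": {"hacked", "stolen", "unauthorised", "fraud"},
    "product_frustration": {"broken", "useless", "refund"},
    "navigation_help": {"find", "where", "how do i"},
}
# Emotionally charged intents that should reach a human agent.
ESCALATE = {"account_compromise", "product_frustration"}

def classify(prompt: str):
    """Return (intent, escalate_to_human) for a raw customer prompt."""
    text = prompt.lower()
    for intent, keywords in TAXONOMY.items():
        if any(keyword in text for keyword in keywords):
            return intent, intent in ESCALATE
    # Prompts the taxonomy cannot place go to a human by default.
    return "unclassified", True

print(classify("I think my account was hacked!!"))
# ('account_compromise', True)
```

Note the same deny-nothing-silently principle as with agents: anything the taxonomy cannot place defaults to a human, rather than letting the bot improvise.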

  3. Automated content creation versus enhanced creative workflows

Generative AI can create various forms of content, including product descriptions, marketing copy, articles, and even code, saving time and resources. But it lacks finesse and quality, and in many cases it clearly shows that the content was created by a soulless machine. Worse, the output may be fabricated by the GenAI system, infringing image rights or copyright because it simply does not know any better. A better option is to invest in and deploy AI tools that assist creative teams by generating ideas, creating variations of existing designs, and automating repetitive tasks. GenAI is still an output system in training and therefore needs a master: someone who will know whether the output is what we all want, or an aberration.

Your mission in 2025 should be to make your workforce AI-savvy, that is, familiar with as many GenAI systems as possible and increasingly confident in detecting and pinpointing GenAI’s shortcomings, so that you can build proprietary systems trained on your own content.

Aside from GenAI, in 2025 the AI marketplace will present quite extraordinary competitive forces amongst the big AI companies and the upcoming challengers. Here are some of the most exciting developments that you should follow:

Large Foundation Models versus Small Foundation Models

Leading the charge, Google, OpenAI, and Meta are actively developing LFMs to gain superiority in future advanced AI applications, while Microsoft, NVIDIA, Salesforce, Apple, and Amazon also enter the competitive landscape via their investments in them and/or the rapid integration of LFMs into their product ranges. Players like AWS, Anthropic, Stability AI (open-source), and Mistral AI (open-weight LFMs) are also emerging as challengers. But the most exciting new entry into this “costly” and “energy-draining” landscape is the arrival in 2025 of Small Foundation Models like IBM’s Granite 3.0, with challengers such as LiquidAI (an MIT spin-off), HuggingFace, AI21Labs, and Cohere, just to mention the top of the pyramid.

The challenges faced by LFMs fundamentally go against the targets that governments and businesses are aiming for: energy efficiency, sustainability, and the one every business fears most, namely sending proprietary data to the LFM provider for training when specialised tasks and their customisation so require. Small Foundation Models can expose APIs that work with on-premise data, so no data leaves the house. And because they process less data, they are more sustainable and energy-efficient, as well as infinitely more affordable. The typical client of a Small Foundation Model provider is thus a small or medium enterprise, a segment that typically represents 80%–90% of the industrial landscape of developed nations. Watch this space in 2025.
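The “no data leaves the house” point can be made concrete with a sketch of a client for a model served inside the company network. The endpoint URL, route, and payload schema below are assumptions for illustration, not any real vendor’s API; the essential property is that the request only ever targets the local network:

```python
import json
from urllib import request

# Hypothetical on-premise endpoint for a locally hosted small foundation
# model. Everything about this URL and schema is an illustrative assumption.
LOCAL_SFM_URL = "http://localhost:8080/v1/generate"

def build_payload(prompt: str, max_tokens: int = 128) -> bytes:
    """Serialise a generation request for the on-premise model."""
    return json.dumps({"prompt": prompt, "max_tokens": max_tokens}).encode()

def query_local_model(prompt: str) -> str:
    """Send the prompt to the local model; data never leaves the premises."""
    req = request.Request(
        LOCAL_SFM_URL,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:  # reaches only the local network
        return json.loads(resp.read())["text"]

# The payload can be inspected and audited before any call is made:
print(build_payload("Summarise this contract."))
```

Because the model host sits on-premise, the audit question shifts from “what did we send the provider?” to the far easier “what runs on our own servers?”.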

Advanced AI Data Governance is the one realm where businesses will be able to “control” that AI performs safely

Data governance has typically evolved from regulatory frameworks watching out for copyright infringement and the protection of private records. In advanced AI systems, the main aim is to guarantee that the outputs and agent decisions made by interpreting the data are correct, safe, and trustworthy.

If in 2025 you want to prepare your company to have agency over its data, especially if you are thinking of adopting advanced AI applications, you need to ensure that all data you gather and bring in from external sources has a clear provenance, in order to assure its quality. In addition to data lineage, you must verify its integrity: that the data has not been tampered with, that it is complete and whole, and that it has been gathered using trustworthy tools. Ultimately, these two steps will give you certainty as to the veracity of the data, because AI systems cannot tell what is true and what is false; they always assume the data fed to them for training is “true”, and this is why outputs “hallucinate” or simply overfit (render incorrect outputs).
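The provenance-plus-integrity steps above can be sketched as a minimal record attached to each dataset: a lineage field saying where the data came from, a timestamp, and a cryptographic fingerprint that detects tampering before the data is used for training. The field names are illustrative assumptions, not a standard schema:

```python
import hashlib
from datetime import datetime, timezone

# Minimal sketch of a provenance record with an integrity fingerprint.
# Field names ("source", "collected_at", "sha256") are illustrative.
def make_record(source: str, data: bytes) -> dict:
    return {
        "source": source,                             # provenance / lineage
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(data).hexdigest(),   # integrity fingerprint
    }

def verify(record: dict, data: bytes) -> bool:
    """True only if the data still matches its recorded fingerprint."""
    return hashlib.sha256(data).hexdigest() == record["sha256"]

data = b"customer feedback export, 2025-01-08"
record = make_record("crm-export-v2", data)
print(verify(record, data))                 # True: untouched
print(verify(record, data + b" [edited]"))  # False: tampered with
```

A hash check cannot tell you whether the data is *true*, only that it is unchanged since collection; that is exactly why the article pairs integrity with provenance, so that trust ultimately rests on a known, vetted source.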

All of the above recommendations form a safety net for AI assurance, and a first step to ensure that in 2025 your company has solid ground from which to leverage AI’s potential. It is going to be an exciting year ahead.

This guest article was written by Inma Martinez. As a government advisor, Inma Martinez is currently Chair of the Multi-Stakeholder Expert Group and Co-Chair of the Steering Committee of the GPAI, the G7/OECD global agency for AI development and cooperation.

Book Inma Martinez for speeches and keynotes with The Premium Speakers Bureau.

Inma Martinez

Digital Pioneer in Artificial Intelligence & Future Technologies