Here is what organisations need to consider when making the right choice as they accelerate their AI journey.
How do total costs (implementation and operational) compare between building our own system and buying an existing one?
Generative AI has proven once and for all that some AI models should not be built by every organisation. Take ChatGPT, which cost $10 million to train in its current form: that investment would never generate a Return on Investment (ROI) for the vast majority of organisations, and even more would be unable to fund it at all. Hence, it is prudent for such organisations to opt for a pre-built model, e.g. open source or as-a-service, to access this capability.
For the many other AI use cases that do not require an investment on the scale of ChatGPT, however, the question remains which option yields the higher ROI: building AI or buying it. Answering that question requires forecasting and business case modelling that estimates, for both options, the cost of building or buying the model and the cost of running it operationally.
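As a rough illustration of what that business case modelling involves, the sketch below compares total cost of ownership and ROI for the two options over a planning horizon. All figures, the cost breakdown and the simple cost formula are illustrative assumptions for this sketch, not benchmarks or a prescribed method.

```python
# Minimal build-vs-buy business case sketch. All numbers are hypothetical
# placeholders; replace them with your own forecasts.

def total_cost(upfront: float, annual_run_cost: float, years: int) -> float:
    """Upfront (build or integration) cost plus operational cost over the horizon."""
    return upfront + annual_run_cost * years

YEARS = 3  # planning horizon

# Build: larger upfront investment (team, data, infrastructure), lower run cost.
build_tco = total_cost(upfront=400_000, annual_run_cost=150_000, years=YEARS)

# Buy: smaller upfront integration cost, higher recurring subscription fees.
buy_tco = total_cost(upfront=50_000, annual_run_cost=250_000, years=YEARS)

expected_annual_benefit = 300_000  # estimated value generated by the use case

for label, cost in (("build", build_tco), ("buy", buy_tco)):
    roi = (expected_annual_benefit * YEARS - cost) / cost
    print(f"{label}: TCO over {YEARS} years = {cost:,.0f}, ROI = {roi:.0%}")
```

Even a simple model like this makes the trade-off explicit: the option with the lower upfront cost is not necessarily the one with the better ROI once operational costs over the full horizon are included.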
Does our organisation have the data, capability and time to build an AI system which is better than what we can buy?
We may be able to deliver an AI model internally at a cost comparable to what is available in the market. However, if we lack the capabilities and datasets needed to create competitive models, we may lose out to competitors who can access more cutting-edge AI tools at the same price point. This is critical to keep in mind when deciding whether it is worth investing in building our own models.
How do risks compare between building versus buying when considering emerging regulations?
New AI regulations are continuously being proposed across the EU, the most recent and prominent being the agreement on the AI Act. It introduces obligations for both AI system owners and AI system providers, including requirements such as providing transparency of models and analysing and removing bias in the datasets used for training. For certain use cases, organisations may view these upcoming requirements as a risk because of the financial penalties involved. However, purchasing access to models rather than building them internally may remove key liabilities, such as providing transparency, which eliminates or lowers the risk of those penalties.
Does building or buying fit better with our current operating model (or lack thereof) for AI?
AI models and tools require continuous maintenance regardless of whether they are built or bought. For organisations that already have well-established operating models for deploying, monitoring and maintaining models, such as MLOps, consuming AI models as-a-service may still create the need for new processes and methods for deploying, monitoring and maintaining those systems. This should be considered a potential addition of complexity.
This is relevant not only where existing AI centres of excellence take ownership of as-a-service models, but also where IT or business teams take ownership of purchased out-of-the-box AI tools.
What are the data privacy challenges related to buying versus building?
AI use may sometimes require sensitive data, whether related to the business or to your customers. When evaluating the sensitivity of that data and the risks of sending it to an AI system provider outside your own organisation, it is also important to evaluate the complexity of the legal safeguards that need to be put in place.
As with any non-AI solution, there may be requirements for Data Processor Agreements (DPA) under the General Data Protection Regulation (GDPR) as well as emerging requirements under the EU AI Act that could cause additional legal work before the AI system can be put into operation.
What risks could our organisation face if purchasing an AI system leads to vendor lock-in?
Organisations have no control over the performance of products provided by vendors or over the future state of a vendor's business. This can therefore be a risk for an organisation that is operationally reliant on AI systems supplied by an external provider. Should challenges arise around the quality, cost or availability of the current provider, it is important to ensure that there is either an alternative provider or that the functionality of the consumed AI model can be replicated internally.
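One common way to keep that option open is to consume AI models through an internal abstraction layer rather than calling a vendor's API directly throughout the codebase. The sketch below illustrates the idea; the class and function names are hypothetical placeholders, not any particular vendor's SDK.

```python
# Sketch of an internal abstraction layer that reduces vendor lock-in.
# All classes are illustrative placeholders.

from abc import ABC, abstractmethod


class TextGenerationService(ABC):
    """Internal interface the rest of the organisation codes against."""

    @abstractmethod
    def generate(self, prompt: str) -> str: ...


class ExternalVendorService(TextGenerationService):
    """Wraps a purchased, as-a-service model behind the internal interface."""

    def generate(self, prompt: str) -> str:
        # In practice this would call the vendor's API; a placeholder is returned here.
        return f"[vendor response to: {prompt}]"


class InternalModelService(TextGenerationService):
    """Alternative implementation backed by an internally hosted model."""

    def generate(self, prompt: str) -> str:
        # In practice this would call the internally deployed model.
        return f"[in-house response to: {prompt}]"


def build_service(use_vendor: bool) -> TextGenerationService:
    # Switching providers becomes a configuration change, not a rewrite.
    return ExternalVendorService() if use_vendor else InternalModelService()


if __name__ == "__main__":
    service = build_service(use_vendor=True)
    print(service.generate("Summarise our quarterly results"))
```

Because business logic depends only on the internal interface, replacing a struggling provider, or moving the capability in-house, is contained to a single adapter rather than spread across every system that consumes the model.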