Artificial intelligence (AI) researchers at Google Research and Google DeepMind have developed a method by which a large language model (LLM) can be augmented with other language models. 

This addresses one of the biggest outstanding problems with LLMs by allowing developers to imbue existing models with new abilities without having to start from scratch or engage in costly retraining/fine-tuning sessions.

According to the Google Research team, augmenting an LLM with another language model both improves performance on existing tasks and enables new tasks that neither model could achieve on its own.
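At a high level, the idea is to keep both models frozen and train only a small learned "bridge" that lets the large model attend to the smaller specialist's representations. The sketch below is a conceptual illustration of that kind of composition, not Google's actual implementation: the hidden states, dimensions, and the single cross-attention layer are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
D_ANCHOR, D_AUG = 8, 4  # hidden sizes of the (frozen) anchor and augmenting models

# Stand-ins for hidden states the two frozen models produce for the same input
anchor_h = rng.normal(size=(5, D_ANCHOR))  # 5 tokens from the large anchor model
aug_h = rng.normal(size=(5, D_AUG))        # 5 tokens from the small specialist model

# The only trainable parameters in this sketch: one cross-attention bridge
W_q = rng.normal(size=(D_ANCHOR, D_ANCHOR)) * 0.1
W_k = rng.normal(size=(D_AUG, D_ANCHOR)) * 0.1
W_v = rng.normal(size=(D_AUG, D_ANCHOR)) * 0.1

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def compose(anchor_h, aug_h):
    """Cross-attend from anchor tokens to the augmenting model's tokens,
    then add the result back to the anchor states (residual connection)."""
    q = anchor_h @ W_q
    k = aug_h @ W_k
    v = aug_h @ W_v
    attn = softmax(q @ k.T / np.sqrt(D_ANCHOR))
    return anchor_h + attn @ v

out = compose(anchor_h, aug_h)
print(out.shape)  # (5, 8)
```

Because only the bridge parameters would be trained, neither base model needs retraining, which is what makes the approach cheaper than fine-tuning from scratch.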

Teaching old chatbots new tricks

The research was conducted using Google’s PaLM2-S LLM, a model the company says is comparable to GPT-4, the AI underpinning OpenAI’s ChatGPT.

PaLM2-S was benchmarked by itself in the team’s experiments and then again after being augmented with smaller, specialized language models. The tasks performed included translation, where the augmented version showed up to a 13% improvement over the baseline, and coding.

When tested in coding tasks, the hybrid model showed significant improvements, per the paper:

“Similarly, when PaLM2-S is augmented with a code-specific model, we see a relative improvement of 40% over the base model for code generation and explanation tasks—on-par with fully fine-tuned counterparts.”

Potentially massive implications

On the surface, the demonstrated performance gains could have immediate implications for the AI sector. The improvement in translation tasks, for example, was greatest when translating low-resource languages into English. This remains an outstanding problem in machine learning, and Google’s work here has the potential to move the needle.

However, in the greater scheme, it’s possible that this vein of research could address the sword of Damocles hanging over many tech CEOs in the AI sector: legal troubles that could dismantle the very foundation of chatbots such as ChatGPT.

Copyright vs. artificial intelligence

The makers of some of the most popular large language models have been named as defendants in numerous lawsuits hinging on allegations that these AI systems are trained on copyrighted data.

The question lawmakers and the courts will have to answer is whether a for-profit company can legally use this data to train its language models. In the extreme, were the courts to rule that developers cannot use such data and that any models trained on copyrighted material have to be purged, it may be technically impossible or financially infeasible to continue offering the affected services.

Essentially, because of the high costs involved in training large language models, and their dependence on massive troves of data, products such as ChatGPT, as they’re built today, might not be viable in a more-regulated U.S. AI landscape.

However, if Google’s new LLM augmentation scheme pans out with further development, it’s possible that many of the scaling requirements and costs of spinning up an LLM from scratch or retraining an existing one could be mitigated.

Related: Italy to tackle AI regulation as one of main priorities during G7 presidency