Humanizing Artificial Intelligence

Small Language Models

Small Language Models are trained on less data and have fewer parameters - the internal values a model adjusts as it “learns”.

This makes them smaller in size, so they take up less storage and require less computing power to run.

These smaller models are ideal for mobile apps and embedded devices, where speed and size matter.
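
To make that concrete, here’s a minimal sketch of running a small model on modest hardware. It assumes the Hugging Face transformers library, and “distilgpt2” (roughly 82 million parameters) stands in for any small checkpoint you might choose.

```python
# A minimal sketch of running a small language model locally.
# Assumes the Hugging Face transformers library is installed;
# "distilgpt2" is just an illustrative small checkpoint.
from transformers import pipeline

# A small model downloads and loads quickly, and fits comfortably
# in memory on an ordinary laptop CPU - no GPU or cloud API needed.
generator = pipeline("text-generation", model="distilgpt2")

result = generator(
    "Small language models are useful because",
    max_new_tokens=40,        # keep the completion short
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```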

The difference isn’t just size or raw capability. Most often, small models also have a narrower focus. For example, a law firm might develop a small language model trained specifically on legal content.

Companies can also build Small Language Models for specific areas of their business. These hyper-focused models often produce better results in their niche because they aren’t diluted by the irrelevant training data that large, general-purpose models carry.
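
Here’s a rough sketch of how a team might adapt a small model to its own domain. It assumes the Hugging Face transformers and datasets libraries; “distilgpt2” and the file name “legal_corpus.txt” are illustrative placeholders, not a prescribed setup.

```python
# A rough sketch of fine-tuning a small model on domain-specific text.
# Assumes Hugging Face transformers and datasets; "distilgpt2" and
# "legal_corpus.txt" are hypothetical placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Load the organization's own documents as plain text, one example per line.
dataset = load_dataset("text", data_files="legal_corpus.txt")["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="slm-legal", num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the result: a small model tuned to the firm's own language
```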