Stability AI, the creator of the renowned image-generation software Stable Diffusion, has unveiled a suite of open-source language models, expanding the large language model (LLM) field. The release offers a viable alternative to OpenAI’s ChatGPT, which could benefit an industry growing anxious that OpenAI and its principal investor, Microsoft, are becoming too dominant.
The alpha versions of the StableLM suite, featuring models with 3 billion and 7 billion parameters, are now publicly available. Models with 15 billion, 30 billion, and 65 billion parameters are in development, and a 175 billion-parameter model is planned further out.
By comparison, OpenAI’s GPT-4 is estimated to have around 1 trillion parameters, roughly six times as many as GPT-3. Even so, Stability AI emphasized that parameter count may not be an accurate measure of an LLM’s effectiveness.
The robustness of the StableLM models remains to be seen. The Stability AI team has pledged to publish more information about the LLMs’ capabilities on its GitHub page, including model definitions and training parameters. Many industry insiders welcome the emergence of a strong, open-source alternative to OpenAI’s ChatGPT.
Sophisticated third-party tools such as BabyAGI and AutoGPT, as recently reported, are bringing recursion into AI applications: they can create and modify their own prompts for successive runs based on newly acquired information.
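The idea of a tool rewriting its own prompts can be sketched in a few lines. This is a toy illustration, not how AutoGPT or BabyAGI is actually implemented; `fake_llm` is a placeholder standing in for a real model API call.

```python
def fake_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned 'finding'."""
    return f"finding derived from: {prompt.splitlines()[-1]}"

def recursive_agent(goal: str, rounds: int = 3) -> list:
    """Run a simple loop that feeds each answer back into the next prompt."""
    notes = []
    prompt = f"Goal: {goal}"
    for _ in range(rounds):
        answer = fake_llm(prompt)
        notes.append(answer)
        # The agent rewrites its own next prompt using what it just learned.
        prompt = f"Goal: {goal}\nKnown so far: {answer}\nWhat is the next step?"
    return notes

for note in recursive_agent("summarize the StableLM release"):
    print(note)
```

Each iteration folds the previous output into the next prompt, which is the "recursive instances" behavior the article describes, applied here to a stub instead of a hosted model.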
Adding open-source models to the mix may benefit users who prefer not to pay, or cannot afford, OpenAI’s access fees. Interested readers can try a live interface for the 7 billion-parameter StableLM model hosted on HuggingFace.
It remains to be seen which company steps up to the plate next to offer similar LLMs.