Liquid AI, a Massachusetts-based artificial intelligence (AI) startup, introduced its first generative AI models not built on the existing transformer architecture. Dubbed Liquid Foundation Models (LFMs), the new architecture moves away from Generative Pre-trained Transformers (GPTs), the foundation for popular AI models such as OpenAI's GPT series, Gemini, Copilot, and more. The startup claims the new AI models were built from first principles and that they outperform large language models (LLMs) in the comparable size bracket.
Liquid AI's New Liquid Foundation Models
The startup was co-founded in 2023 by researchers at the Massachusetts Institute of Technology (MIT)'s Computer Science and Artificial Intelligence Laboratory (CSAIL), with the aim of building a newer architecture for AI models that can perform at the same level as, or surpass, GPTs.
These new LFMs come in three parameter sizes: 1.3B, 3.1B, and 40.3B. The latter is a Mixture of Experts (MoE) model, which means it is made up of various smaller language models and is aimed at tackling more complex tasks. The LFMs are now available on the company's Liquid Playground, Lambda (for both Chat UI and API), and Perplexity Labs, and will soon be added to Cerebras Inference. Further, the AI models are being optimised for Nvidia, AMD, Qualcomm, Cerebras, and Apple hardware, the company stated.
LFMs also differ significantly from GPT technology. The company highlighted that these models were built from first principles. First principles is essentially a problem-solving approach where a complex technology is broken down to its fundamentals and then built up from there.
According to the startup, these new AI models are built on something called computational units. Put simply, this is a redesign of the token system, for which the company instead uses the term Liquid system. These units comprise condensed information with a focus on maximising knowledge capacity and reasoning. The startup claims this new design helps reduce memory costs during inference and increases performance output across video, audio, text, time series, and signals.
The company further claims that the advantage of the Liquid-based AI models is that their architecture can be automatically optimised for a specific platform based on its requirements and inference cache size.
While the claims made by the startup are tall, their performance and efficiency can only be gauged once developers and enterprises begin using the models in their AI workflows. The startup did not reveal the source of its datasets, or any safety measures added to the AI models.