Mojo, the New AI-First Language, Runs LLaMA2 Inference 250x Faster Than Python!
It's even about 20% faster than C.
Mojo only opened for download last Friday, and it has already proven itself.
Recall that the official claim was that Mojo can be up to 68,000x faster than Python.
And the author says there is still room for further improvement.
OpenAI founding member Andrej Karpathy has already come by to take a look.
llama2.mojo is now available for download~
The port comes from Aydyn Tairov, a former Meta engineer.
Using Mojo's SIMD (Single Instruction, Multiple Data) and vectorization primitives, he ported llama2.py to Mojo, with performance nearly 250x that of the Python version.
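For context, the hot loop in llama2-style inference is matrix-vector multiplication. A pure-Python sketch of that loop, roughly as llama2.py runs it (function name and shapes here are illustrative, not the repo's exact code):

```python
# The hot loop of llama2-style inference: matrix-vector multiplication.
# llama2.py executes this as nested scalar Python loops; llama2.mojo
# replaces the inner loop with SIMD vector operations.
def matmul(w, x, n, d):
    """w: flat row-major (d, n) weight matrix; x: vector of length n."""
    out = [0.0] * d
    for i in range(d):
        s = 0.0
        row = i * n
        for j in range(n):  # the inner loop that Mojo vectorizes
            s += w[row + j] * x[j]
        out[i] = s
    return out
```

Per-token cost is dominated by loops like this, which is why vectorizing them pays off so heavily.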
Even against the C version in fast-run mode, the Mojo version is 15-20% faster.
However, when the author tried running in parallel mode in Mojo, it was much slower.
The system and hardware the author used for the comparison:
If you want to download and run the model yourself, first install and set up Mojo in your environment (docs link at the end of this post).
First, clone the repository into your project folder:
git clone https://github.com/tairov/llama2.mojo.git
Then enter the repo folder:
cd llama2.mojo
Next, download the model:
wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin
Then run it:
mojo llama2.mojo
num hardware threads: 6
SIMD vector width: 8
checkpoint size: 60816028
Once upon a time, there was a little girl named Lily. She loved to play outside in the sunshine. One day, she saw a big, red ball in the sky. It was the sun! She thought it was so pretty.
Lily wanted to play with the ball, but it was too high up in the sky. She tried to jump and reach it, but she couldn't. Then, she had an idea. She would use a stick to knock the ball down.
Lily found a stick and tried to hit the ball. But the stick was too short. She tried again and again, but she couldn't reach it. She felt sad.
Suddenly, a kind man came by and saw Lily. He asked her what was wrong. Lily told him about the ball. The man smiled and said, "I have a useful idea!" He took out a long stick and used it to knock the ball down. Lily was so happy! She thanked the man and they played together in the sunshine.
Once upon a time, there was a little girl named Lily. She loved to play outside in the sunshine. One day, she saw a big, red
achieved tok/s: 264.24870466321244
Why is Mojo so fast?
So, back to the question: why can Mojo be this fast?
The answer starts with Mojo's origins.
It was born in May this year, built specifically for AI development by Chris Lattner, the father of LLVM and Swift.
It combines the strengths of Python and C++: simple syntax, fast execution, and seamless interoperability with any Python library.
Since launch, Mojo has attracted 120,000 developers, and its GitHub repo has 9K stars.
In August, Modular, the company behind Mojo, raised a new $100 million round, bringing its total funding to $130 million.
Mojo's speed comes down to four steps.
Step 1: use type annotations to eliminate the overhead of Python's dynamic typing, and apply algebraic simplifications to avoid square-root operations and simplify the squaring of complex numbers, for an 89x speedup.
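These figures come from Modular's Mandelbrot-style benchmark, so the two algebraic tricks can be sketched on its escape-time loop (names and the iteration cap here are illustrative, not Modular's exact code):

```python
# Algebraic simplification in a Mandelbrot escape-time loop. The naive
# escape test abs(z) > 2 needs a square root; squaring both sides
# (|z|^2 > 4) removes it, and expanding z*z + c by hand avoids a
# generic complex multiply.
def mandelbrot_iters(c_re, c_im, max_iters=200):
    z_re, z_im = 0.0, 0.0
    for i in range(max_iters):
        if z_re * z_re + z_im * z_im > 4.0:  # |z|^2 > 4  <=>  |z| > 2
            return i
        # z = z*z + c expanded: re' = re^2 - im^2 + c_re, im' = 2*re*im + c_im
        z_re, z_im = z_re * z_re - z_im * z_im + c_re, 2.0 * z_re * z_im + c_im
    return max_iters
```

Both rewrites keep the math identical while dropping the most expensive operations from the inner loop.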
Step 2: vectorize the computation to exploit SIMD (single instruction, multiple data), and widen the vectors to match the number of the CPU's FMA (fused multiply-add) units, reaching 874x.
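The idea of vectorization is to evaluate the escape test for a whole vector of points per operation, the way Mojo's `vectorize` issues one SIMD instruction across all lanes. A pure-Python illustration with plain lists standing in for hardware lanes (this is a sketch of the concept, not Mojo's actual SIMD API):

```python
# Vectorization sketch: one lane-wise operation over a vector of points
# instead of a scalar check per point.
WIDTH = 8  # matches "SIMD vector width: 8" from the run log above

def escaped_lanes(z_re, z_im):
    # all lanes checked in a single bulk operation
    return [re * re + im * im > 4.0 for re, im in zip(z_re, z_im)]
```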
Step 3: take the single-threaded implementation from the first two steps and parallelize it across cores; on an 88-core system this adds another 30x, reaching 26,000x over the original Python.
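Step 3's decomposition can be sketched in Python: fan the image rows out across workers, as Mojo's `parallelize` does across cores. Note that CPython threads share the GIL, so this only illustrates the row split; real CPU speedup needs processes or a GIL-free runtime like Mojo. Names are hypothetical:

```python
# Parallelization sketch: each worker computes whole rows independently.
from concurrent.futures import ThreadPoolExecutor

def compute_row(y, width):
    # stand-in for computing one row of iteration counts
    return [y * width + x for x in range(width)]

def render(height, width, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda y: compute_row(y, width), range(height)))
```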
Step 4: fix the load imbalance in the parallel version by having threads pull tasks dynamically from a pool, for the final 68,000x result.
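The load imbalance arises because rows near the set boundary cost far more iterations than others, so a static row split leaves some threads idle. A sketch of the dynamic task pool, with workers pulling the next item from a shared queue (illustrative helper, not Mojo's API):

```python
# Load-balancing sketch: workers pull tasks from a shared queue until it
# drains, so fast workers pick up the slack instead of sitting idle.
import queue
import threading

def dynamic_map(func, items, workers=4):
    tasks = queue.Queue()
    for i, item in enumerate(items):
        tasks.put((i, item))
    results = [None] * len(items)

    def worker():
        while True:
            try:
                i, item = tasks.get_nowait()  # pull work dynamically
            except queue.Empty:
                return
            results[i] = func(item)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```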
A few days ago, Mojo officially opened for download. It currently supports Linux, with Mac and Windows support coming later.
There is also a VS Code extension with syntax highlighting, code completion, and more.
And, like Python, it can be used interactively in Jupyter.
If you're interested, go give it a try~
GitHub repo:
https://github.com/tairov/llama2.mojo
Mojo docs:
https://docs.modular.com/mojo/manual/get-started/index.html