I don't think the article says anything like that. Folks are concerned that it took less compute to _develop_, but you have to remember: it's built off Llama.

I'll be surprised if it needs less compute to run, given that the base model is 650 GB. I should know; I've been tinkering with it.
