Meet Infuzu: A Conversation with Founder Yidi Sprei

Infuzu’s founder, Yidi Sprei, has been working with AI for a long time, and our product was born of his experience in the field.

As our story spreads further and the Infuzu community grows, we thought you might want to know more about the man behind the curtain. Fresh out of college and with a voracious work ethic, Yidi is plugging away at Infuzu’s product daily, but he sat down with us to discuss his vision for AI.

What were the early phases of Infuzu like?

I started Infuzu in June of 2023. I had done a lot of freelance work, building custom software for large businesses. Quite a few asked for custom chatbots, so Infuzu was originally built as a no-code AI dev platform. That’s not what it’s become, but that’s where it started.

Everyone was kind of looking for the same base product, so Infuzu made intuitive sense: there was no reason to have the multitude of subscriptions that companies were buying into.

Has AI always interested you?

Always. I’ve been working in the space for more than five years, and tech in general interests me. Neural networks especially interested me in the beginning; I’ve played around with building my own, but I haven’t tried building LLMs.

Do you use Infuzu personally?

All day, every day. I probably generate 50 new conversations a day across a variety of channels. Most recently, GPT-4o has accounted for a lot of my use, but Claude 3.5 is going to overtake that.

Anthropic’s approach to AI is actually incredibly interesting. They built safety into the platform from the start, which other companies don’t really do. So to outperform other models while staying safe and spending less money is very impressive.

GPT still outperforms in code and API interaction right now.

But yes: I use Infuzu every day.

What do you think AI will look like in 5 years?

I expect players we haven’t even heard of to take over, and everything I just said about ChatGPT and Anthropic will be a thing of the past.

Open source work has been lagging behind, but open models aren’t lagging far behind the paid ones anymore. Mistral is an open source AI startup that has been releasing models out of France. Originally, they released new builds as torrent links on Twitter!

Is AI the future of search?

Probably not. Search was already a fairly solved problem. And the use cases we’ve seen from Meta, for example, suggest that Meta is indexing all of the conversations we’re having over WhatsApp and so on. That’s the price of convenience right now.

I do think Google’s AI Overviews are the future: that kind of summarizing. But Google has always used some form of AI. ChatGPT became popular in early 2023, but it wasn’t actually a huge leap from where Google already was. They’ve been using AI for a long time to index pages and things like that. We moved away from simple keyword search a long, long time ago.

What about other AI products?

The biggest use case for AI will always be increasing productivity. Take search again as an example: you have a question, and AI should be able to answer it.

LLMs aren’t going to become much better conversationalists; they’re going to become better assistants: improved accuracy and decreased cost. That’s what we’re seeing with Llama 3, for example. They’ve reduced resource requirements significantly.

AI is ultimately the perfect thing to bridge the gap we’ve always had with technology. There’s a digital world and a human world, and this is a bridge between them. Compare it to the touchscreen: the touchscreen broke down a language barrier, and AI does that even more effectively.

Can AI write an email for me?

AI is your assistant on its first day. It can do the bulk of the work, but you’re always going to have to double check it.

Something I think people need clarity on is that these LLMs, once you’re using them, aren’t training anymore. So they’ll get better with updates, but it’s hard for them to improve on your niche use cases.

If you had 30 seconds to get me using AI regularly, what would you do?

Start off by using it; talk to it, ask it to write you silly poems.

You generate content; AI can help with that. It’s simply a matter of working it into your workflow. If you use it to take notes, for example, AI can write up your grocery list. But you shouldn’t ask it to come up with one, because who knows what it might invent.

The simple rule of AI use is to use it when it’s more efficient than if you did the work yourself.

Look at math, for example. AI can learn to solve your problems for you, but these networks are fundamentally different from other kinds of computing. A calculator will be more efficient.

Chess is another great example. Traditional software was capable of beating Kasparov in the ’90s using brute-force computing: Deep Blue. Then Google had AlphaZero, which was never released publicly but used what we now call AI for the same task. It didn’t have to brute-force anything. And that approach is now built into modern chess engines. That’s what AI is good at. AI is bad at Connect Four: it’s too solved, too traditional.

AI is amazing with language because of the amount of choice that goes into it. That’s why we want to use these models. There are so many possible responses to anything you say that traditional computing could never handle it. If I ask, “When was Obama elected?” a traditional computer will tell me 2008, because that’s the literal answer to the question, but that might not be what I’m actually looking for. I might want more information about the circumstances, or I might have wanted an exact date.

When you have that amount of subjectivity, AI can help out.

Any parting words?

You should always be looking for better ways to do things; AI is a better way for many tasks. Use it when it makes sense.
