I was up early again today, but rather than jumping into another doom-and-gloom blog post, I figured it was time to get down to business on some of the things I see myself doing with this blog. I mentioned in an update two days ago that I have a lot of ideas for drawing attention to the issues around AI and how we go about trusting what we see online: ways to show people, with their own eyes, that there is something to discuss when it comes to the future of the internet and global discourse. We are pretty comfy for the most part; we can get out into the streets, visit friends, have water flowing to our homes, and still have many of the social safety nets we need. I think there's no better time than now to talk about securing online personas, while we are still free to do so.
I've been working since 6 AM today (it's 5 PM now) with some new hardware I've picked up: some used gear to start down the local LLM path. Any basic computer will do; you don't always need the latest or greatest. There's a lot of e-waste out there, and if you can keep something around longer and reuse it, that's fewer resources we need to pull out of the ground. We already mined, refined, and manufactured the product. It sucks that most technology is considered obsolete after a few years. M$FT now wants to enforce a hardware security feature called TPM 2.0 (more info about the hardware can be found straight from the horse's mouth).
TPM 2.0 is a requirement to run consumer versions of Windows 11 (for some reason enterprise customers won't require it...). So hardware built before TPM 2.0 (and cheaper hardware without it) can't be upgraded from Windows 10 to Windows 11. You'll need to buy a new computer with TPM 2.0 if you don't have it, since it's a requirement for Windows going forward, as Microsoft reiterates its “non-negotiable” TPM 2.0 requirement for Windows 11. It's unfortunate, but don't fret: I've been using Ubuntu for a few months to get a feel for software development outside of Windows. It might not be easy peasy for everyone, but I have experience and can troubleshoot with you if you take the leap away from Windows. I don't enjoy what M$FT is doing to their OS as it relates to AI; they're sticking AI even into Notepad and Paint! There are also things like Recall that make me shudder; corporate-sponsored malware is now coming to a PC near you, and you'll be paying for the privilege! So get red-pilled and take the plunge into the Free and Open Source Software (FOSS) universe with Ubuntu.
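If you're curious whether your machine even has a TPM, here's a quick way to check from Linux (just a sketch; on Windows, the PowerShell cmdlet `Get-Tpm` does the same job):

```python
# Check for a TPM from Linux: the kernel exposes TPM devices under /sys/class/tpm.
# If that directory is missing or empty, no TPM is visible to the OS.
from pathlib import Path

def tpm_devices() -> list:
    """Return the names of any TPM devices the kernel knows about."""
    base = Path("/sys/class/tpm")
    return sorted(p.name for p in base.iterdir()) if base.is_dir() else []

devices = tpm_devices()
print(f"TPM devices: {devices}" if devices else "No TPM device found")
```

No TPM showing up here is exactly the situation that blocks the Windows 11 upgrade on older hardware.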
So, local AI, amirite? AI as a SaaS uses a bunch of water to cool the computers in the data centers running it. From what I found, three questions of 10-15 words each use about 500 mL of fresh water for an AI to process. That's because they run in big data centers that require water cooling, with no impurities, to keep the silicon's temperature down. This 5-minute-30-second YouTube clip is a pretty succinct look at the amount of water used by AI data centers. In the video he talks about water usage equal to about a quarter of the needs of a small town, The Dalles, Oregon, which is similar in size to the town I live in. They have 3 data centers in a town of 16,000, so I'm sure the jobs and local economy are flourishing... not likely. It actually looks like about 200 jobs were added to the area, out of 10,640 jobs: not a huge number for 3 buildings using the same amount of water as 4,000 people. It's a pretty shitty amount of water being pumped out of the ground.
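To put those numbers in perspective, here's a quick back-of-the-envelope in Python. These are the rough figures cited above; they vary a lot between studies and data centers, so treat this as illustrative only:

```python
# Back-of-the-envelope AI water use, using the rough figures cited above.
ML_PER_THREE_QUERIES = 500            # ~500 mL of fresh water per ~3 short prompts

ml_per_query = ML_PER_THREE_QUERIES / 3
queries_per_day = 1_000               # hypothetical: a busy day of chatbot traffic
litres_per_day = queries_per_day * ml_per_query / 1_000

print(f"~{ml_per_query:.0f} mL per query")       # ~167 mL per query
print(f"~{litres_per_day:.0f} L/day for {queries_per_day} queries")
```

Scale that hypothetical thousand queries up to the traffic a real data center handles and you can see how it adds up to small-town levels of water.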
So yeah, how is this not doom and gloom? Well, there are local AI models you can run on your own computer with a bunch of FREE applications, or you can pay money for the SaaS version if you prefer. I was playing with these applications today, and my computer needed exactly 0 mL of water to operate. I'll break down what I was doing here and give examples of what each potential setup can accomplish. I won't write a tutorial myself, but there will be links to docs and videos if you want to give it a go yourself.
Isaac's Saturday of AI Frustrations
I know I spout a lot of Chicken Little rhetoric, and despite that I still genuinely struggle with the tooling for AI. One thing I know for a fact: the old internet, old phones, old anything was harder to use than after it had aged a bit. And there is a loooot of money and interest in AI right now, so in today's AI landscape we're just the Wright brothers flying a mere 37 meters. I'm using normal consumer hardware. For you hardware nerds, this is what I'm using:
- Microsoft Surface Pro 8
  - 16 GB RAM
  - i7-1185G7 @ 3.00 GHz
  - 256 GB SSD
  - Thunderbolt 4
- Sonnet Breakaway 750W eGPU box (upgraded to a mini-ATX 850W EVGA PSU)
- EVGA RTX 3090 FTW3 Ultra
  - 24 GB VRAM
  - 10,496 CUDA cores
Long story short: 16 GB of RAM without a pagefile is kinda not enough, and a 256 GB SSD is too small with a full Windows 11 install plus all the software. Some of the LLM models are 10-24 GB in size. The main issues I kept bumping into were out-of-memory exceptions and running out of disk space; everything else went smoothly. When I ran out of RAM I had to reboot, and when I ran out of disk space I had to delete a few models and clean things out. I moved some software to a USB flash drive and will try that tomorrow.
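Here's the rough arithmetic that kept biting me, as a sketch. The rule of thumb (my own assumption, not an exact rule) is that a model needs about its file size in RAM or VRAM, plus some headroom for context and runtime overhead:

```python
# Rough rule of thumb (my assumption, not exact): a model needs about its file
# size in memory, plus ~20% headroom for context and runtime overhead.
def fits(model_gb: float, memory_gb: float, headroom: float = 1.2) -> bool:
    """True if a model of model_gb should load within memory_gb of RAM/VRAM."""
    return model_gb * headroom <= memory_gb

VRAM_GB = 24   # RTX 3090
RAM_GB = 16    # Surface Pro 8, no pagefile

for model_gb in (10, 14, 24):
    print(f"{model_gb} GB model -> GPU: {fits(model_gb, VRAM_GB)}, "
          f"CPU RAM: {fits(model_gb, RAM_GB)}")
```

By this estimate a 24 GB model doesn't even fit in the 3090's VRAM, and anything over about 13 GB blows past 16 GB of system RAM, which matches the out-of-memory errors I was seeing.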
So I hit a few roadblocks, but now I understand the lay of the land a bit more. I know the limits of my hardware and will start again tomorrow with that extra knowledge. Gaining this knowledge first-hand is what lets me continue my journey with local AI. But I think our stories are all intertwined: there was a lot of human history before I started my journey, and there will be plenty of future ahead of me too. People put this tech out there for me to find and have some fun with. I want to use my time here to build a beacon of knowledge about this new and interesting tech, and to play a small role in shaping how the public sees it. Playing with your own AI and sharing the knowledge can definitely help against the threat of AI, since your personal information stays locally on your PC.
Llama GPT with some nice Software
I've already written a lot in this post, and many people might not be too receptive to having to look up every few words. I want this to be accessible, so if you have questions, I'm sure you're not alone! Ask here, or reach out to the communities linked from each piece of software to get help. Some of these are dead simple to set up, especially with a YT video. I tried four different applications today, each with a different objective and a different AI model. All of them build on local Llama-style models, which you can run by installing Ollama. Ollama can run a model by itself in a terminal, but the software listed below adds a nice UI to make your experience with AI even better.
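As a taste of what Ollama gives you under the hood: once it's running, it exposes a local HTTP API on its default port, 11434. This is just a sketch based on Ollama's documented `/api/generate` endpoint, and it assumes you've already pulled a model (e.g. `ollama pull llama3`):

```python
# Talk to a local Ollama server over its HTTP API. Assumes Ollama is running on
# the default port (11434) and the named model has already been pulled.
import json
from urllib import request

def build_payload(model: str, prompt: str) -> dict:
    """Request body shape for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str, host: str = "http://localhost:11434") -> str:
    body = json.dumps(build_payload(model, prompt)).encode()
    req = request.Request(f"{host}/api/generate", data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # ask("llama3", "Why run AI locally?") would hit the server; this just
    # shows the payload without needing Ollama up.
    print(build_payload("llama3", "Why run AI locally?"))
```

The nice part is that everything stays on localhost: the prompt never leaves your machine.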
But without further ado, here's what I was looking at today for the better part of 13 hours:
1. bolt.new-any-llm - use it as a pair programmer to manage tasks and help start coding. I did get a working TTS script out of it (inspired by the one I made for here), but it took a bunch of prompts. Note: requires a special developer version of Chrome to work.
   - Model: Qwen/Qwen2.5-Coder-32B-Instruct - made by Alibaba to write software
   - Video tutorial - Qwen Just Casually Started the Local AI Revolution
2. stable-diffusion-webui - generates images from text and images from other images, among other image-related features.
   - Model: stabilityai/stable-diffusion-xl-base-1.0 - made by Stability AI
   - Software:
   - Video tutorial - host all your AI locally
   - Setup:
3. ComfyUI - another stable-diffusion front end (similar to #2). I haven't attempted this one yet and don't have a YT video tutorial for it, but I wanted to use a newer version of stable-diffusion that wasn't loading in the webui listed in #2.
   - Model:
   - Software:
   - Usage: similar to #2, just improved a bunch in theory, if I can get it to work. I just ran out of time today and wanted to get this post out.
4. LM Studio - desktop app with models you can pick right from the UI. Very user-friendly.
   - Didn't do much with this yet; I mainly wanted to try loading stable-diffusion into it, since the art from the first model sucked (see below).
Well, that's about it. This post took me too long to cobble together, probably a bit longer than the AI post, because I had to actually do the research and try things out. Oh well, it was fun, even if I ran into a bunch of dead-end paths. Next time I go down this road, hopefully tomorrow, I can avoid those dead ends. Maybe tomorrow I'll show you my first image generated by a stable-diffusion 3.5 medium or large model. Sorry if you made it this far and don't know what you're reading; go back and look at some of the links to see if you can piece them together.
Isaac
PS - I know this stuff is in disarray, I'll organize it tomorrow. I'm heading to a Christmas party tonight so I need to get my hair did. I'll proof-read and reorganize the content tomorrow.
I revamped the last part with the list of AI software I was playing with. Hopefully it's more fluid now and makes more sense 🤷♂️