About AI Agents

It was 3:00 AM, and I was lying in bed, thinking about daily matters. My thoughts were scattered and restless; I felt the need to be productive after an inactive day. As I reflected on my career and past projects, my mind turned to AI—its possibilities, applications, and how it could become an extension of ourselves in daily life. Suddenly, an intriguing idea struck me: I could create an automation system with AI agents to publish content on my rarely-used blog, transforming it into something more like a newsletter.

I jumped out of bed and started up my computer, connecting to my local AI server (that's another story). Having experimented with various agent frameworks such as Autogen, n8n, and CrewAI, I began by discussing my vision with ChatGPT (GPT-4). The workflow was straightforward: agents would gather news from different sources, summarize the headlines, draft a blog post, and publish it. ChatGPT (which I named Nova) offered practical suggestions and outlined a clear plan.
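That workflow can be sketched as a simple four-step pipeline. Everything below is illustrative: the function names are placeholders, not the actual project code, and each step stands in for work an agent would do.

```python
# Illustrative sketch of the blog-poster pipeline.
# Each function is a placeholder for a step an AI agent would perform.

def gather_news(sources):
    # A real agent would fetch articles from each source.
    return [f"headline from {s}" for s in sources]

def summarize(headlines):
    # A real agent would condense the headlines with an LLM call.
    return " | ".join(headlines)

def write_post(summary):
    # A real agent would turn the summary into a full blog post.
    return f"Today's news digest:\n{summary}"

def publish(post):
    # A real agent (or tool) would push the post to the blog.
    return {"status": "published", "length": len(post)}

def run_pipeline(sources):
    # The steps run strictly in sequence, each feeding the next.
    return publish(write_post(summarize(gather_news(sources))))

result = run_pipeline(["hacker-news", "techcrunch"])
print(result["status"])
```

The sequencing matters: each agent's output becomes the next agent's input, so a failure anywhere stops the chain.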

Though development would be in Python, a language I had no prior experience with, I wasn't deterred. Since I understood general coding principles, I simply opened VS Code and enabled GitHub Copilot's agent mode with the ChatGPT model. Copilot created the project structure, and I chose CrewAI to model my agents and tasks. Within 15 minutes, my blog-poster app was running.
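CrewAI's model is essentially agents with roles, tasks assigned to those agents, and a crew that runs the tasks. Below is a framework-agnostic sketch of that shape in plain Python; the real CrewAI classes (`Agent`, `Task`, `Crew` with a `kickoff()` method) take more fields and actually call an LLM, so treat these simplified stand-ins as an illustration, not the library's API.

```python
from dataclasses import dataclass, field

# Simplified stand-ins for CrewAI's Agent / Task / Crew concepts.
@dataclass
class Agent:
    role: str
    goal: str

@dataclass
class Task:
    description: str
    agent: Agent

    def run(self, context: str) -> str:
        # A real framework would send the agent's goal and the task
        # description to an LLM here; this stub just traces the call.
        return f"[{self.agent.role}] {self.description} (context: {context})"

@dataclass
class Crew:
    tasks: list = field(default_factory=list)

    def kickoff(self) -> str:
        # Tasks run sequentially; each task's output becomes
        # the next task's context.
        context = ""
        for task in self.tasks:
            context = task.run(context)
        return context

researcher = Agent(role="researcher", goal="collect headlines")
writer = Agent(role="writer", goal="draft the blog post")
crew = Crew(tasks=[
    Task("gather today's news", researcher),
    Task("write a post from the research", writer),
])
print(crew.kickoff())
```

The appeal of this structure is that each agent stays narrowly scoped, while the crew handles orchestration and context passing.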

The first attempt failed to publish to my blog, so I paused to have Copilot create a tool for server communication. After some quick research on the WordPress API, everything fell into place. The agent wrote and optimized the publishing tool, and soon I witnessed my first AI-written blog post go live. By then it was 5 AM—time for sleep.
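For reference, publishing through the WordPress REST API boils down to a POST to `/wp-json/wp/v2/posts` with Basic auth (typically an application password). This stdlib-only sketch separates payload building from the network call; the URL and credentials are placeholders, and the actual tool Copilot generated may look different.

```python
import base64
import json
import urllib.request

def build_post_payload(title: str, content: str, status: str = "draft") -> dict:
    # Minimal body accepted by WordPress's POST /wp-json/wp/v2/posts endpoint.
    return {"title": title, "content": content, "status": status}

def build_request(base_url: str, user: str, app_password: str,
                  payload: dict) -> urllib.request.Request:
    # WordPress application passwords use HTTP Basic auth.
    token = base64.b64encode(f"{user}:{app_password}".encode()).decode()
    return urllib.request.Request(
        url=f"{base_url}/wp-json/wp/v2/posts",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
        method="POST",
    )

payload = build_post_payload("AI news digest", "<p>Hello from an agent.</p>")
req = build_request("https://example.com", "bot", "app-password", payload)
print(req.full_url)
# urllib.request.urlopen(req) would actually publish; skipped here.
```

Keeping payload construction separate from the HTTP call makes the tool easy to unit-test without a live server, which helps when an AI agent is the one iterating on it.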

The next day's attempt to publish another post failed despite using the same code, a reminder that AI outputs vary unless explicitly constrained. I spent the day tuning the prompts and code with Copilot, refactoring for more consistent output. Copilot resolved the consistency issues within an hour, but this highlighted an important lesson: an LLM's inherent variability, even with identical inputs, leads to tuning challenges. And with tuning comes another consideration: cost.

My setup is an older machine with an Intel i7-10700KF, an RTX 2080, and 32 GB of RAM. I run Ollama for local AI hosting, and my tool talks to a local LLM through it. I chose Mistral 7B for this project, as it outperformed Llama 3.2 in my tests. While I attempted to use Mixtral 8x7B, my hardware limitations made it unreliable.
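Ollama exposes a local REST endpoint (`/api/generate`, on port 11434 by default), and pinning `temperature` and `seed` in its `options` is one way to rein in the run-to-run variability mentioned above. This is a sketch built with the standard library, not the project's actual client code, and the prompt is just an example.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_generate_request(prompt: str, model: str = "mistral") -> urllib.request.Request:
    body = {
        "model": model,
        "prompt": prompt,
        "stream": False,          # return one JSON object instead of a token stream
        "options": {
            "temperature": 0,     # greedy decoding: reduces run-to-run variation
            "seed": 42,           # fixed seed for more reproducible sampling
        },
    }
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request("Summarize today's tech headlines in one paragraph.")
# with urllib.request.urlopen(req) as resp:   # requires a running Ollama server
#     print(json.loads(resp.read())["response"])
```

Temperature 0 and a fixed seed don't guarantee identical outputs across model versions or hardware, but they remove most of the sampling randomness.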

For production environments, building an agent-based infrastructure requires robust servers capable of handling traffic, generating accurate responses, and completing tasks reliably. The prompt tokens consumed by each agent's instructions add to the operational costs. Companies must carefully evaluate what to build, set realistic expectations, and calculate potential costs.
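As a back-of-the-envelope example, per-request cost scales with the prompt tokens each agent's instructions consume plus the completion tokens it generates. The prices and token counts below are assumptions for illustration, not quotes from any provider.

```python
def request_cost(prompt_tokens: int, completion_tokens: int,
                 price_in_per_1k: float, price_out_per_1k: float) -> float:
    # Hosted LLMs are typically billed per 1,000 input and output tokens.
    return (prompt_tokens / 1000) * price_in_per_1k \
         + (completion_tokens / 1000) * price_out_per_1k

# Hypothetical numbers: 4 agents, each sending ~800 tokens of instructions
# and context and producing ~500 tokens, at assumed prices of
# $0.01 per 1k input tokens and $0.03 per 1k output tokens.
per_run = sum(request_cost(800, 500, 0.01, 0.03) for _ in range(4))
print(f"${per_run:.3f} per pipeline run")
```

Multiply that per-run figure by posting frequency and the number of retries your tuning loop triggers, and the costs of an always-on agent fleet become easy to estimate.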

Ultimately, with tools like Copilot, I quickly built a working application in a programming language I had never used. The experience felt like collaborating with a colleague, a measure of how far AI tooling has come in recent years.

If you want to check it out, here is the project's link: https://github.com/ekinbulut/crewai-python-publisher