[0:00]Claude Code has been getting a lot of attention lately, but it's not the only AI coding tool worth using. OpenCode is growing in popularity, and for good reason. It's 100% open source, it supports multiple providers and models, including free models, and it gives you an incredibly powerful agentic coding workflow right in the terminal. In this video, I'm going to show you everything you need to know to get started with OpenCode, from the initial setup to advanced features like agent skills, MCP servers, sub-agents, background agents, and real project workflows. By the end of this video, you'll know how to use OpenCode as a serious alternative to Claude Code. Let's jump in. The first thing we need to do is install OpenCode. You can just go to opencode.ai, and from this page, simply go through the installation instructions. They've kept the setup process extremely simple. You can copy this curl command over here, then open up your command prompt or terminal, paste in that command, and press Enter. It will only take a few seconds to install OpenCode. Beautiful. Now that we've got OpenCode installed, we can start it by simply entering the command opencode and pressing Enter. And that's it, we've now got access to OpenCode. When using OpenCode, I do recommend opening it up in a project folder. So open up your terminal window, first navigate to that folder, and then start up OpenCode. Something I like to do as well is to start OpenCode from a code editor, like VS Code or Cursor. This is definitely optional, but the benefit of doing it this way is that I can easily see any files being created by the agent. If you just want to use the terminal window, that's perfectly fine as well. So within our project folder, let's start up OpenCode. The first thing we need to do is connect OpenCode to any provider that we're planning to use.
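For reference, the setup from the site boils down to something like this (the exact installer URL may change, so copy it from opencode.ai rather than from here):

```
# One-line installer from opencode.ai (verify the URL on the site before piping to a shell)
curl -fsSL https://opencode.ai/install | bash

# Then start it from inside your project folder so the agent sees the right files
cd my-project
opencode
```

Starting it from the project folder matters because the agent reads and writes files relative to wherever you launched it.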
So what we can do is type a forward slash and enter connect, which will run the /connect command. Then from this list of providers, we can search for a provider, or go through the massive list of providers they support out of the box. Now, this might be enabled by default, but I do recommend connecting to OpenCode Zen. This is basically their own gateway for calling models, and it gives you access to free models, by the way, which is why it's definitely worth it. So just look for OpenCode Zen. This is not going to cost you anything; we just have to hook up an API key. So I'm just going to open up this URL. From this dashboard, we can view our usage and set up any billing details. And just to prove to you that these models are free, my balance is actually zero at the minute. So let's go to API Keys, and let's create a new key. I'll just call this one Open Code Masterclass. Let's create it, copy the key, pass it to OpenCode, and press Enter. I am going to delete that key, so please use your own. After we sign in to a provider, we can select from the available models for that provider. As you can see, there are actually free models in this list, like Big Pickle, HY3, MiniMax 2.5, and Nemotron 3 Super from NVIDIA. Now, I do want to mention that with free models, there's always a chance that they might be using your prompts to train these models. So if you just want to try this out for free or follow a tutorial, you can definitely use the free models, and I will show you how to hook this up to an actual paid service as well. For now, I'll select Big Pickle. And now we're back to this chat interface. Let's just say, hey. We do get a response back, and it didn't cost us a thing. So at the moment, our project is very empty. Let's ask our agent to scaffold a new Next.js project: Please can you set up a new Next.js project. Use the npx command.
And I'm actually just going to add to this as well: in the current working directory. There's always a chance that the agent might install Next.js into some kind of subfolder. Right, that's good enough. Let's send this. As you can see, it's running the npx command to install Next.js. And if we have a look at our file system, we do indeed see our Next.js project. Let's see if OpenCode can also run commands: Please start the dev server. And indeed, it's starting the dev server on port 3000. And cool, our Next.js project is running. Right, so we can interrupt the agent by pressing Escape. If we press Escape again, we've just interrupted this agent. What I also want to do is clear the conversation by typing /new. And now we've got a fresh session with a fresh context. If we ever want to resume a previous session, you can simply run the /sessions command. We can see that earlier today we started this conversation, and if we wanted to, we could simply continue with this chat. But I do want to start with a new conversation. And for the rest of this video, I'm actually not going to use the free model. Instead, let's connect OpenCode to a different provider. So I'll run the /connect command again. Now, in order to connect OpenCode to a more intelligent model, you can select a provider like OpenAI or Anthropic or Gemini, whatever you want. But here is something you need to be careful about. You have two different ways to pay for inference: you could pay for APIs, or you could use your existing subscription with one of these providers. As an example, if you want to use something like Anthropic or OpenRouter, you could provide your API key, and you will be billed based on your token usage. So it's kind of a pay-as-you-go option. But honestly, I think that's the most expensive option out there. I would not recommend using API keys, especially for coding tasks. Instead, use your subscriptions.
Now, unfortunately, Anthropic is not supported yet. And if you are in the know, you know that Anthropic is banning pretty much anyone who's using their subscriptions with third-party tools. It's not limited to OpenCode either; they're banning all sorts of tools, like OpenClaw as well. So to be on the safe side, I would recommend just hooking it up with OpenAI, or maybe even something like Kimi. Personally, I'll be using OpenAI. From here, we can select our auth option, like ChatGPT Pro or Plus. This is referring to our subscription, so if you do have a Pro or Plus subscription, choose that option. The third option was to provide your API key, which I just would not recommend. Then we can open up this URL in the browser, sign in with our account, and continue. And done. Now we can select the model that we would like to use. Let's go with GPT 5.5. By the way, if you ever want to change your model, simply run the /models command. So let's click on that. Now we can select from any of the available models. We can also search for models, so let's search for GPT 5.5. I'll select the one from OpenAI, and that's it. At the moment, we are using GPT 5.5, but with medium reasoning effort. If we ever want to change the reasoning effort, all we have to do is run the /variants command, which gives us the reasoning efforts specific to this provider. For GPT 5.5, I'll go with high reasoning. Now, before we build our project, I do want to assign certain agent skills to this agent that will help it produce even better results. If you're new to skills, skills are just a really cool way to add additional capabilities to our agent. Skills are nothing more than very detailed instructions that the agent can follow to get very specific results. There are basically two skills that I would recommend installing into a project like this.
Since we're building a web application, we want to give this agent the front-end design skill. Now, the easiest way that I've found to explore and install skills is with this website, skills.sh. This is a repository of nearly 100,000 skills. You can simply search for skills, but you can already see from the all-time favorites that the front-end design skill is at the top of the list. So let's copy this command. Then, back in our project, I'm just going to open up a new terminal window, and let's install that skill. OpenCode is not in this list, but that's not a problem at all, because it uses the standard agents folder for retrieving skills and anything else we want to attach to it. So I'm not going to select anything in this list; let's simply press Enter. We'll install this at project level, and let's proceed with the installation. Just to show you what this did: it created this agents folder, which is a standard convention for the majority of coding agents out there. And within the agents folder, we've got this skills subfolder, along with our front-end design skill. This contains a name for the skill, a description of when to use the skill, and just a very detailed prompt on building beautiful user interfaces. Since we are using Next.js, I do want to install one more skill, and that's the Next.js skill from Vercel Labs. Let's copy this command, and I'll run that in the terminal as well, just like we did with the front-end design skill. We don't have to select any of these tools. We'll install it at project level, and that's it. So now in the skills folder, we've got our front-end design skill and the Next.js skill, which actually contains a lot of reference documentation for using Next.js. Now, our app will have AI functionality baked into it, so I'm going to be proactive and install the AI SDK skill from Vercel. Again, just search for AI SDK, copy this command, and let's install this skill as well.
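If you open one of those skill files, the shape is typically a markdown file with frontmatter; the exact field names and wording below are illustrative, not copied from the actual skill:

```markdown
---
name: frontend-design
description: Use when designing or styling UI components, pages, or layouts
---

# Frontend Design

Detailed, step-by-step instructions that the agent follows whenever the
description above matches the task at hand...
```

The description is what the agent uses to decide when a skill is relevant, so it pays to keep it specific.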
We can verify that the agent has access to all of those skills by running the /skills command. And here we can see our front-end design skill, the AI SDK, Next Best Practices, and Next.js. Some of these skills might actually be stored in my user folder, but for you, you should definitely have those three skills available. Just a side note: if you don't see any skills, simply exit out of OpenCode by running exit, then reopen OpenCode, and now you should see all of those skills. So I'll just switch my model back to GPT 5.5 high. What I'm also going to do is go to source control and create a commit: Added skills. Nice. And by the way, if you click on this OpenCode logo, you get this effect. Another important component to understand is memory files. In the root of our project, we can create a new file called agents.md. Now, I already have that file available, but if you don't, you can create it yourself. And that file looks something like this. I'm actually going to delete everything that's in the file at the moment. This is where we can tell our agent all about our project and provide very strict instructions; it actually forms part of the system prompt for this coding agent. So if there are any specific rules that you want the agent to follow, this is where you need to add those. If you want OpenCode to set up that file itself, you can run the /init command. OpenCode will scan your code base, then automatically create and set up this agents.md file. Once this is done, we have our agents.md file with some instructions detailing the layout and tech stack of this project. Now, what I actually recommend you do is clear out this file and add as little information as possible. Only provide the strict instructions that the agent needs to follow. You'll thank me for it.
What we are going to do is change this agents.md file drastically. I'm going to add all of these rules to the agents.md file instead. I do want to mention, you can access all of this for free; I'll link to the GitHub repository in the description of this video. So if you want, you can simply copy my agents.md file and follow along. First, I like to add this rule: keep your responses concise and to the point. Sometimes these agents are way too chatty, and we are paying for the tokens, so I prefer short and concise answers unless I ask otherwise. Then, when the agent is in plan mode, it must ask clarifying questions. It should never assume design, tech stack, or features. And if available, it must use sub-agents or background agents to assist with things like research, and it should also use background agents to review the different aspects of the plan before presenting it to the user. When the agent is in change or edit mode, it should never implement features itself; it should always use sub-agents. Now, there's a really big reason for that: we want to protect the main agent's context window as much as possible. You'll notice that even though this conversation is still short, we're already taking up like 10,000 tokens, or 3% of the context window. At some point, usually past the 50% mark, we start reaching the dumb zone of these agents, where the quality worsens very quickly. So by using sub-agents, we're telling this main agent to delegate work and tasks to sub-agents, and those sub-agents will only hand back a final summary of what they did. That way, we're keeping the main agent's context window as clean as possible. Also: identify any changes from the plan that can be implemented in parallel. When using sub-agents to implement these features, act as a coordinator only. Use the best model for the task: premium models for complex tasks like coding, and mid-tier models for simpler tasks like documentation.
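Condensed into a file, the rules so far might look something like this (the wording is my own paraphrase; the full version is in the linked repo):

```markdown
# Rules

- Keep responses concise and to the point.
- In plan mode: ask clarifying questions. Never assume design, tech stack,
  or features. Use sub-agents/background agents for research and to review
  the plan before presenting it to the user.
- In edit mode: never implement features yourself. Delegate to sub-agents
  and act as a coordinator only. Identify plan items that can be
  implemented in parallel.
- Use premium models for complex tasks (coding) and mid-tier models for
  simpler tasks (documentation).
```

Short, imperative bullets like these tend to stick better in the system prompt than long prose explanations.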
After completing features, always run commands like lint, type check, and next build to check code quality. This might not be relevant to our application necessarily, but if we had a database, we could enforce some database rules as well. And if we had a testing framework or tools, we could add a section enforcing testing too. Now, for the UI design: you must always follow the UI design system when creating or reviewing components or pages. If you've ever had that issue with agents where the UI is not consistent across the app, you can use this approach to force it to use a very specific design system. So it is linking to this design.md file. I am going to create that file in this folder, called design.md, and I'm going to paste in this design system. This just gives a lot of detail on the colors, like the primary colors, and borders, and spacing, stuff like that. Again, you can download all of this from the description as well, and I actually have a lot of videos on building design systems that agents can use. Excellent. Just to make sure everything takes effect, I'm going to create a new session, which should pull in our new agents.md file with all of these new rules and conditions. I'm also going to create a commit called memory files and design system. Right, now that everything is set up, let's start building our actual application. This is going to be a fun one. But first, a quick word from today's sponsor, Sintra AI. And I think you're actually going to care about this one, because they solve a problem a lot of us have. Let's be honest: if you've been following the agentic coding world, you know that shipping software is faster than it's ever been. You can go from idea to working product in a weekend with Claude Code or Cursor. The building part is basically solved. But then the product just sits there, because the hard part isn't building anymore, it's everything else.
The social posts, the launch emails, the newsletters, the product update announcements: stuff most of us didn't sign up to do. That's where Sintra comes in. Think of it as the other half of your business. It's a team of AI employees, each one trained as a specialist for the non-coding parts of running a product or business. I've been using two in particular. First is Soshi, the social media manager. I can tell Soshi about a new feature I just shipped, and she'll generate a week of posts across platforms, in my voice, with my positioning. I review, edit what doesn't feel right, and approve what does. The second is Emmy, the email marketer. I ask her to draft a product update email; she pulls context about my product from the Brain AI profile and writes the first draft. I still edit it, it's not magic, but it beats staring at a blank email. There are 10 other helpers covering SEO, customer support, copywriting, data analysis, and more. And there's a 14-day money-back guarantee, so you can try the full team risk-free. Head over to sintra.ai/leonvanzyl, link in the description, or use code Leon van Zyl at checkout. You'll get an exclusive limited-time 72% off on all plans. That's sintra.ai/leonvanzyl, code Leon van Zyl, for 72% off. Thanks to Sintra for sponsoring this video. This app will allow the user to pass in a rough idea of what they're trying to build, and our app will then suggest a tech stack and certain aspects of the application. At that point, we can fine-tune them using the UI itself, and by the end of it, we'll have a prompt that we can copy and pass to our coding agent. It's kind of a design tool or a spec tool where we can plan the project before handing it over to our agent. We can switch modes by holding Shift and pressing Tab. This will switch us between build mode and plan mode.
You'll also notice that there are different models for plan mode and build mode, because sometimes you might prefer one model to do the planning and another model to do the actual implementation. So in planning mode, I'm going to pass this prompt. Again, you can just get this from the repo itself, but to go over it: we're building a lightweight AI project planner app for this tutorial. The app should let a user enter a rough app idea, then generate an editable project brief with the app summary, the target users, core features, a recommended tech stack, the pages and routes, a possible data model, the build phases, risks and edge cases, and a final copyable starter prompt for a coding agent. And we'll keep the scope small: no user authentication, no database, no payments. It's just a simple Next.js application. I'm just going to switch out one thing here to say that we will be using the AI SDK with OpenRouter for inference. And there's one more thing I would like to add: for the data model, it would be cool if you could visualize the relationships between the different entities as well, perhaps using React Flow or something. All right, I'm going to copy this prompt. Back in OpenCode, let's paste in all of that. In fact, we do have to change the model; I definitely don't want to be using Ollama for this. I do recommend for planning that you go with the most intelligent model that you have access to. If the plan is detailed enough, you can use a less intelligent coding model. So for this, let's just paste in that prompt again, and let's send this. All right, this is a promising start. We can see that the agent pulled in the AI SDK skill, Next.js, and the front-end design skill. And you know what, since we are using shadcn, what we can also do is install the shadcn skill as well. That's kind of normal for my process.
As I'm adding new frameworks and capabilities, I like to look up an official skill for that framework or tool and install it into the project. And now the agent is asking us a few clarifying questions. For the first implementation, should the AI response be generated all at once or be streamed section by section? I think all at once. Should drafts persist only during the current page session, or restore after refresh using local storage? Let's go with local storage. How interactive should the data model visualization be initially? I think it should just be a read-only graph. And let's confirm these. Right, we've got our implementation plan. No files were changed. All right, that's cool. Confirmed choices. Here we've got our architecture. We have to add a few dependencies, which is 100% correct. We have to install the AI SDK as well as the OpenRouter provider for the AI SDK. We'll install Zod for schema validation, and for visualizing the data model, we'll install React Flow using this package. And this seems very cool. We do have an open question though: which OpenRouter model should be used as the default? If you don't care, I'll use an env var like OPENROUTER_MODEL with a reasonable default documented in .env.example. Well, actually, let's find a model that we can use. I'm thinking that I'm simply going to use Sonnet 4.6. So let's copy this model name: For the model, use Anthropic Claude Sonnet 4.6. And let's send this. Right, we've got this plan, but it's actually just stored in the session at the moment. What I like to do is persist this plan. So in this agents folder, I'm going to create a new subfolder called plans. Then we can switch over to build mode, and I'm going to pull in the location of this plans folder: Please don't make any code changes yet. Please store this plan in the plans folder. And done. In the plans folder, we now have our implementation plan over here.
Which means we can now refer back to that plan in the future. It also means we can clear this context by creating a new session. Let's pull in this plan, and then, while we're in build mode, let's say: Please go ahead and implement this plan. That should actually be enough. Let's send this. If it follows our agents.md file's instructions, it should now split the plan up into tasks that can be executed in parallel, and then run background agents to implement it. Let's see. I'll read the implementation plan and project structure first, then split any independent work across sub-agents where it makes sense. Perfect. This is such a cool workflow. Nice. It's created this to-do list, so we can see exactly what the agent is planning to do, and it's currently installing all of these dependencies. While the agent is busy working through this plan, we do have to set up our OpenRouter API key. So in the root of the project, let's create a new file called .env. The .env file allows us to store sensitive information; this is typically stuff that shouldn't be pushed to GitHub or to some repository where anyone can view it, which makes it perfect for storing API keys. So we'll create a new variable called OPENROUTER_API_KEY. Then I'll go back to OpenRouter, create a new key, and just call it AI Project Planner. Let's create the key, copy it, and paste it in. That's it, we're done. This will allow our application to interact with AI models. And by the way, if you were wondering whether you can create your very own custom sub-agents: yes, you can. By default, if we go to agents, we have a build agent and a plan agent, but we can add our very own custom agents as well. Let me show you how. I'm going to open up a new terminal session, then let's run the command opencode agent create.
I'll just create this agent in this local project, and now we can give it a description of what this agent should be able to do. I'll just call it Coder Agent, I don't know, just for fun, and pass that in. This will generate a new agent for us. So if you want to create agents that specialize in only UI design, or coding, or testing, or documentation writing, you can definitely do that as well. And our main agent is making a lot of progress here. As this tooltip says, if you want to see what the sub-agents are up to, we can press Ctrl+X and down to view our sub-agents. Currently we're looking at the general agent; if we press right, we can see the explore agent, and if we press right again, we can see the other explore agent as well. Then we can simply press up to get back to our general agent.
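The generator writes the agent out as a small markdown file with frontmatter. The sketch below is only the general shape; the field names and wording are assumptions from memory of the OpenCode docs, so check the file the generator actually produces in your project:

```markdown
---
description: Writes and edits application code based on detailed task briefs
mode: subagent
---

You are a focused coding agent. Implement exactly the task you are given,
keep your changes minimal, and report back a short summary of what changed.
```

The body becomes the agent's system prompt, and the description is what the main agent uses when deciding to delegate to it.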
[33:38]Well, cool. The implementation is complete. I'm going to open up a new terminal window, and let's manually start the dev server by running npm run dev. And this is what we have. Not bad at all. We can enter our app idea, and hopefully on the right-hand side, we'll see the AI results streaming in. We even have a few example prompts. Let's go with a meal planning app for busy parents that creates weekly grocery lists, and click on generate brief. So it is actually taking a very long time to get a result back. It could just be that the model is taking a lot of time to generate the results, but it would be ideal if all of these sections would stream in as they become available. It took a minute or two, but we got our result back. We've got our brief; the app name could be Weekly Bites. And the cool thing is that we can make changes, by the way. If we don't like Weekly Bites, we could make this, I don't know, Weekly Meals. We can edit the summary. We've got our target audience. Then we've got this JSON array of all the different core features. We also have this recommended tech stack; it's recommending Next.js with Tailwind. All right, that's cool. Pages and routes, and we've got the data model. So everything is here. But instead of showing a JSON array, I would actually prefer something different, maybe nice visual cards instead of an array. Also, at the bottom, we can see this data model graph. For those of you who are interested to see what the database or the data models look like, this is a nice way to visualize those relationships. So let's give the agent some feedback: It's taking a long time to see any results coming in. Please could you use the AI SDK's structured output streaming to stream in sections as they become available? Also, I don't like that you're representing the data as JSON arrays; show cards instead that translate to JSON behind the scenes, which the user can easily edit, add to, or remove.
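Behind the scenes, what that prompt is asking for maps to the AI SDK's streamObject API. Here's a rough sketch of the idea; the model id, env var name, and schema shape are my own assumptions, not the code the agent actually generated:

```typescript
import { streamObject } from "ai";
import { createOpenRouter } from "@openrouter/ai-sdk-provider";
import { z } from "zod";

const openrouter = createOpenRouter({ apiKey: process.env.OPENROUTER_API_KEY });

// Partial schema for the brief; the real app would cover every section.
const briefSchema = z.object({
  appName: z.string(),
  summary: z.string(),
  coreFeatures: z.array(z.string()),
});

export async function generateBrief(idea: string) {
  const result = streamObject({
    model: openrouter("anthropic/claude-sonnet-4.5"),
    schema: briefSchema,
    prompt: `Create a project brief for this app idea: ${idea}`,
  });

  // Each partial object contains whatever sections the model has produced
  // so far, which is what lets the UI render cards as they stream in.
  for await (const partial of result.partialObjectStream) {
    // push `partial` to the client, e.g. over a streamed response
  }
  return result.object; // the final, schema-validated brief
}
```

The partialObjectStream is what makes the sections pop in one by one instead of arriving after a single long wait.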
I think this should be good enough. Let's fire this off. All right, then. Let's see if these changes work. I'll just select one of these other ideas and click on regenerate brief. This should stream back these elements as the model is generating them, so hopefully it will feel a little more performant. And actually, it does. We can see all of these different elements popping in one by one as the model generates them. Okay, that's cool. Again, we can make edits to any of these fields. Instead of the app showing us JSON, it is actually showing us different cards. We can remove cards, we can add additional cards if we want, and we can edit the contents within each of these cards. Cool. All right, our app seems to be making really good progress, but we're not done yet. It still doesn't look very good, so we are going to improve the UI. And secondly, I don't want to keep testing the app manually until the agent is ready, so we'll actually get OpenCode to automatically test the app for us. What we can do is assign an MCP server to this agent that will allow it to use a browser to navigate the app and test it. I'll be using the Playwright MCP server for this. All right, then what we have to do is scroll down to the OpenCode section, where we've got this documentation showing how to set up MCP servers for OpenCode. What we'll simply do is copy this code over here. In order to add MCP servers to OpenCode, what you have to do is create a new file called opencode.json in the root of your project folder. Then in this file, we can paste in this configuration, like this Playwright MCP over here. Now, you can call the MCP whatever you want, like Playwright, or, I don't know, Playwright MCP; you can change that name to whatever you want. What I'm going to do to test this is open up a new terminal window. Let's run opencode, then let's run the /mcps command.
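For reference, the resulting opencode.json ends up looking roughly like this. The server name is whatever you choose, and the exact command array should be copied from the Playwright MCP docs rather than from here:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "playwright": {
      "type": "local",
      "command": ["npx", "@playwright/mcp@latest"]
    }
  }
}
```

Because this lives in the project root, anyone who clones the repo and opens OpenCode there gets the same MCP server wired up automatically.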
And here we can see the Playwright MCP is currently connected. If we ever wanted to disable this MCP, we can simply hit the space bar, and that will disable Playwright for this session. So instead, what we'll do is go back to our main session, where we don't have the MCP server yet. What we actually have to do is exit out of OpenCode, just to pull in this new configuration. Let's go back into OpenCode and run /mcps again. We can see Playwright is indeed connected. And to resume that previous session, we'll just go to /sessions and choose our implementation session. Cool, now we're back to where we were. So as the final part of this build, let's do something really freaking cool: The UI design really sucks at the moment. Use the front-end design skill to completely revamp the UI, including fonts, colors, styles, the works. Then I need you to test the app and visually confirm that everything is working by using the Playwright MCP tools. Once done, also update the design.md file. Cool, this should be fun. Let's run this and see what we get back. All right, this is really cool. The agent made its changes, and it's now using Playwright to operate the browser. And I'm not touching anything, by the way; the agent is driving the app at the moment. It's populating fields, clicking buttons, and testing our app on our behalf, end to end, making any fixes along the way. It's a brilliant way to automatically improve and test your app on autopilot. Let me know down in the comments what you think about OpenCode. Are you going to give it a shot? Also, if this video helped you in any way, please hit the like button and subscribe to my channel for more agentic coding tutorials. If you do want to take your agentic skills to the next level, consider joining my community, Agentic Labs.
I just released the first few videos of my agentic coding masterclass, where I teach you everything you need to know to build real-world applications using AI. If you ever get stuck, we've got live Q&A sessions every Wednesday, and we also have a vibrant community of over 700 AI builders, so someone will definitely be able to help you. Thank you for watching. I'll see you in the next video. Bye bye.