
BREAKING: US Military INSIDER LEAKS The Strategic Bitcoin Reserve BATTLE is SPREADING | EP 1489

Simply Bitcoin

23m 28s · 3,607 words · ~19 min read
Auto-Generated

[0:00]Hello, everybody, and welcome to the Marketing AI Show. My name is Paul Roetzer, and I am your host, along with Mike Kaput, my co-host. Hello, everyone. Thanks for joining us. And we are here for episode 68. The last episode, number 67, we published a big list of resources related to large language models, and just a lot of people getting ramped up. And it was such a great response that we decided we would do something similar today. So today we are going to dive into the topic of how do you actually learn AI? How do you stay up to date on AI? So we're going to share a number of different resources and approaches that we've found successful over the years. Because it is an ever-changing industry, it's very difficult to stay on top of it. And so our hope is that by sharing some of these resources, you can find a way to make it more manageable and to make sure that you and your team are staying up to date on everything happening in the space. But before we dive into that, it is tradition, of course, to talk about the headlines of the week, as it is a very fast-moving space. So Mike, what is happening this week? This week's biggest news, in my opinion, actually comes out of a blog post from Microsoft, and they published a really interesting paper describing how GPT-4 shows sparks of artificial general intelligence. So this was a 155-page paper from a huge team of Microsoft researchers. And what they did is they basically tested GPT-4's capabilities across a whole range of areas, like math, coding, vision, medicine, law, psychology, and more. And the research found that beyond just its fluent and coherent language, GPT-4 can solve novel, difficult tasks that broadly span many domains, and it does so without specific prompting for those tasks. And these are also tasks that it hasn't specifically been trained on. 
So the researchers say that GPT-4's performance on these tasks is actually qualitatively different from previous AI models, and that it actually approaches human-level performance in a lot of these areas, and that it starts to demonstrate an early form of artificial general intelligence, or AGI. Now, the paper is pretty clear to also point out that GPT-4 is not fully AGI, and it still has significant limitations, including hallucination, a lack of access to real-time information, and other problems. However, they note that the breadth and depth of GPT-4's capabilities actually suggests that it can reasonably be viewed as an early, incomplete, but still significant step toward AGI. So Paul, this is pretty big news because AGI is, you know, the holy grail of artificial intelligence. So what do you make of these findings? Yeah, it is big news, and it was a really fascinating paper, and it was actually a bit of a surprise because Sam Altman, the CEO of OpenAI, had said a few weeks ago that they weren't working on GPT-5. They were working on other things, and that this was kind of like it for a while with GPT-4, and that they weren't even focused on AGI. And then this paper comes out from Microsoft that says, yeah, GPT-4 is showing sparks of AGI. So I think one of two things is happening there. Either Sam was, you know, just saying that because he wanted people to calm down a little bit on the hype, or perhaps OpenAI themselves don't fully understand the breadth of capabilities of what they built. And that sometimes is what happens is that these models emerge and the creators themselves don't fully realize what they've built. It's like something just kind of came out of the training, and they're like, oh, we didn't expect it to be able to do this. So it's super fascinating. The other thing is that AGI, artificial general intelligence, it means different things to different people. So if you read the paper, they don't say it is AGI, they say it is showing sparks of AGI. 
And they also give a definition that it is, quote, "a hypothetical AI system that can perform any intellectual task that a human being can." So it's really important to realize that there are many steps in between here. And even if this is showing sparks of AGI, that doesn't mean it is fully AGI. And it also doesn't mean that we get to what's often referred to as artificial superintelligence, which is a system that can perform any intellectual task that a human being can, but at a far superior level. So this is something that a lot of people will refer to as a singularity, where human intelligence is exceeded by machine intelligence. We are not there, and we're not close to that, but it is fascinating to see that something has changed with GPT-4 and its ability to reason. And that's really what the paper is about, is its ability to learn and reason on its own without specific instructions or specific training to do so. So it's a huge finding, and it's definitely something we're going to keep an eye on. Speaking of reasoning, Google DeepMind announced something new called RETRO, which stands for Retrieval-Enhanced Transformer. And this is a new type of large language model that DeepMind says achieves comparable performance to models that are roughly 25 times larger. So what RETRO does is it can actually learn to search an external database or knowledge base for information to use to answer queries. So instead of trying to internalize all the world's knowledge within its parameters, it has the ability to effectively perform its own Google search to find relevant information. Now, the reason this is a big deal is because by not trying to store all that knowledge in the model itself, RETRO is faster, more accurate, and more computationally efficient. The implications for things like real-time search and real-time knowledge are pretty massive. Paul, what do you think this means for the future of large language models? I mean, this is the Holy Grail.
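The retrieval approach Mike just described, where a model looks up passages in an external knowledge base instead of memorizing everything in its parameters, can be sketched in miniature. The sketch below is an illustrative toy, not DeepMind's actual architecture: the tiny corpus, keyword-overlap scoring, and prompt format are all assumptions standing in for the real system's embedding-based nearest-neighbor search over a massive text database.

```python
# Minimal sketch of retrieval-augmented generation: score passages in an
# external corpus against the query, then prepend the best matches to the
# prompt so a language model can condition on them. All names and the
# corpus here are hypothetical examples.

def tokenize(text):
    """Lowercase word tokens with trailing punctuation stripped."""
    return [w.strip(".,!?") for w in text.lower().split()]

def retrieve(query, corpus, k=2):
    """Return the k corpus passages with the most word overlap with the query."""
    q = set(tokenize(query))
    scored = sorted(corpus, key=lambda p: len(q & set(tokenize(p))), reverse=True)
    return scored[:k]

def build_prompt(query, corpus):
    """Prepend retrieved passages as context for the language model."""
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "RETRO retrieves passages from a large external text database.",
    "GPT-4 was evaluated by Microsoft researchers across many domains.",
    "Stability AI trains image models on AWS machine learning chips.",
]

prompt = build_prompt("How does RETRO use an external database?", corpus)
print(prompt)
```

In a production retrieval-augmented setup, the keyword overlap would be replaced by dense vector search over millions of documents, and the assembled prompt would be fed to the language model so its answers can draw on fresh, external text rather than stale training data.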
This is what everybody has been waiting for. Because as you mentioned in the earlier headline, one of the biggest limitations of these large language models is they don't have real-time data. So ChatGPT's knowledge cutoff is sometime in 2021, and so it can't tell you about recent events. So this is how you start solving that problem. So instead of having to retrain the model and build another GPT-5 to bring in 2022 and 2023 data, you build a large language model that is optimized to be able to go get the relevant information in real time. And so then you're marrying the real-time knowledge with the reasoning capabilities of the large language model, and that's when things just go to a whole new level. So DeepMind has been working on this for years, and this is finally kind of the breakthrough that people have been waiting for. So very exciting to see this come out and to see this applied, I think, very quickly to Google's next iteration of their models and their search capabilities. Finally, the last big piece of news this week is that Stability AI, which is the startup behind the open source image generator Stable Diffusion, announced a partnership with Amazon Web Services. Through this partnership, Stability AI is actually going to be using Amazon's custom-built machine learning chips to train and build their next generation of AI models. So this is pretty significant because it means that Stability AI, which is behind one of the most popular generative AI models out there, is moving away from the chips that are developed by Nvidia, which currently dominates the market for chips that power AI models. Paul, what do you think about this move by Stability AI? This is a really big deal because we have seen an insane demand for AI chips, which are generally specialized graphics processing units, or GPUs, like the ones you mentioned from Nvidia. And what that's done is it's kind of created a bottleneck where these companies are trying to train these models.
And so if you're Stability AI and you're building generative AI models, you need a lot of compute, and it's very expensive. So being able to leverage something like Amazon's custom-built machine learning chips could significantly reduce their costs, which then would enable them to train larger, more capable models for less money. And that also means that people that use their open source models or pay for their models, they're going to get better capabilities at a more efficient price. So I think this is a trend we're going to see a lot more of, where companies like Google and Amazon and Microsoft and even Apple are going to be building their own custom chips for AI. And it also reduces the power of Nvidia in this space, which has been a great run for them. But the market is just going to shift really fast here, so interesting to see this deal go down. Alright, so that's the headlines. Now we're going to dive into the topic of how to learn and stay up to date on AI. And as I mentioned, we've gone through this ourselves, trying to build the playbook for what we do, and we wanted to share this with you and encourage you, if you have ideas, other ways you're doing it, to please share them with us, and we will update this. We're going to compile it into an article as well on our site, so you'll be able to see all these resources. But the first thing I would say is, for me, this always starts with setting clear learning objectives. And this is true for anything. When you're trying to learn something new, you have to think about, what do I actually need to learn and why? And that why is really important because if you don't have a strong enough why, you're not going to stay dedicated to it. So for us, we need to know the why so that we can help our clients at our agency. We need to know the why so we can help the industry as a whole. We need to know the why so we can teach the courses we teach at the university level. 
So for us, there's a strong why as to why we have to stay on top of this stuff. So I would encourage you to figure out what that why is for yourself, for your team, for your organization, because that is what's going to enable you to dedicate the time to doing this. The other thing is, you have to prioritize what areas of AI matter to you and your work. You can't learn everything, because everything is always changing. So for us, for example, we're really honed in on large language models and natural language processing. We're honed in on things like generative AI because we see the immediate applications for marketing and business. So that's what we prioritize, because it's what matters to us and our work. And the last part is, then you have to integrate AI into your daily workflow. So you have to make sure that as you're using this stuff, you're actually experimenting and applying it, which we're going to dive into further. But for me, that's step one: really establishing those learning objectives before you dive into any of these resources. So, Mike, what would you say is next once people kind of have that understanding? Next, and this might seem obvious, but it's really important to build a strong foundation of AI knowledge and understanding. And what I mean by that is you don't necessarily need to be an AI researcher or a data scientist to learn about AI, but you do need to understand some basic concepts and terms about how this technology works. And this is important because it really helps you understand the news, what's hype, what's actually important, and how these tools and capabilities can be applied to your business. So one really important way to do this is to take an introductory course on AI. And there are a lot of really great free options out there that we're going to include in our article that accompanies this episode. Some of my favorites include Google's Introduction to AI and Machine Learning course, which is available on Coursera.
There's also Andrew Ng, who is a very prominent researcher. He has a great course on Coursera called AI for Everyone. He also has a ton of content through his company, DeepLearning.AI. And then finally, if you're a marketer, the American Marketing Association also has a great introductory course on AI, which is part of their AI for Marketers series. Now, the other thing that's really important to do to build this foundation is to follow a few core publications that don't always try to hype up everything about AI, but really focus on the facts and what's important. And so a couple of these that I love are MIT Tech Review's The Algorithm, which is their newsletter, and the Artificial Intelligence newsletter from the Wall Street Journal. These are really good to understand kind of the underlying news without all of the kind of hype. And then finally, I would recommend reading some introductory books on AI. A couple of my favorites include Prediction Machines: The Simple Economics of Artificial Intelligence, and then another one is called Human Compatible: AI and the Problem of Control. What would you recommend for those who have a solid foundation and want to stay up to date? So once you have that solid foundation, then it's really about building your own customized intelligence engine to track the space. And I've talked about this before, but this is a systematic approach to how you go about actually finding the news, filtering the news, and then consuming the news, and then figuring out how to synthesize it and apply it. So it starts with identifying key news sources. And Mike mentioned a couple of them that are kind of the more foundational ones. But once you have the foundation, you can start diving into some other ones. So a few others that I personally recommend are VentureBeat's AI coverage. They do a great job. Singularity Hub is another one that covers a lot of exponential technologies, but AI is a core of that.
And then also, if you're on LinkedIn, I would highly encourage you to follow some of the key leaders in the space. So people like Yann LeCun, who's the chief AI scientist at Meta. He's very active on there. Andrej Karpathy, who was the director of AI at Tesla and is now at OpenAI. He's very active and shares incredible resources. And then others, like Dario Amodei, who is the CEO of Anthropic. He's also very active, as is Sam Altman, who's the CEO of OpenAI. These are people that when they say things, it's very relevant. So I would definitely follow those people and then other industry leaders. Also, subscribe to newsletters, like our Marketing AI Show newsletter. It's free, it comes out every Friday. We summarize a lot of these things for you. And then find yourself a news aggregator. So I use Feedly. It used to be Google Reader, for those of you that remember Google Reader. But Feedly does a really good job of letting you custom-build your own news sources, so you can put all those sources into one place. And then the next part is, you need to dedicate time. And I can't stress this enough. This is a time commitment. So I have dedicated time every morning: 6 a.m. to 8 a.m. is my news and research time. So two hours a day, Monday through Friday, is what I dedicate to this. And that's how I stay up to date: I literally have blocked time on my calendar every single day to do this. And so then when you're doing that, you're going to use active reading and critical thinking. So it's not just reading the headlines. You're going to have to click into the source material. You're going to have to dive into the research papers. You're going to have to dive into the patents. You're going to have to think critically and go against the narratives that are being created. Because a lot of times, the media gets it wrong. So you have to go to the original source. And then you want to take notes and synthesize the information.
So I use a tool called Pocket, P-O-C-K-E-T, that's free. And I save articles to that and tag them. I also then extract key information into a note-taking app. I use Google Keep for that. And then I use a tool called Research Rabbit to keep up on research papers. So that is kind of the systematic way to build your own customized intelligence engine. It's how we build the Marketing AI Institute, it's how we build the courses, and it's how we build this show. So that's for staying up to date. What about for actual experimentation and application? So once you've built your foundation and you've found ways to stay up to date, it's really important to find ways to actually experiment and apply AI, in both your personal work and your professional work. Because it's one thing to read about it and talk about it, but until you actually get your hands dirty, you're not going to understand the true power or limitations of the technology. So there are a couple of ways to do this. First, you need to experiment with AI tools and platforms. And what I would recommend here is to start with large language models, like ChatGPT, because they are incredibly powerful and have immediate applications in your day-to-day work. So start using ChatGPT for things like writing headlines, optimizing existing content, helping you brainstorm. Whatever it is, you can almost certainly find ways to integrate it right away. I would also recommend exploring other types of generative AI, such as image and art generators like Midjourney or Stable Diffusion, just to understand what they're capable of. Another way to experiment and apply AI is to actually find and attend AI-focused events and workshops. We, of course, run the Marketing AI Conference, MAICON, every year, which is a great place to go to kind of get the foundational information and understand the trends. But there are tons of other events and workshops that are often put on by companies like Microsoft or Google or Amazon.
And these are great opportunities to learn directly from the builders of the technology. And then finally, I would recommend finding ways to collaborate and share your knowledge with others. So one really simple way to do this is to share articles with your team and get their opinions or talk to them about how you might use these tools. You can also join online communities dedicated to AI, such as subreddits on Reddit or Discord servers, where there are tons of people who are using this tech every day and sharing really cutting-edge strategies and approaches. And then finally, I would recommend trying to find opportunities to educate others at your organization about this technology. As Paul always says, the best way to learn is to teach. So if you can go give a lunch and learn or talk to your team about what you've learned, that's a great way to deepen your own understanding. Paul, anything else you would add on the experimentation front? Yeah, I think I would just stress that the only way to actually learn this stuff is to use it. So the first two steps, setting clear objectives and building a strong foundation, those are important. And then staying up to date, that's important. But it's nothing compared to actually using the tools and seeing what they can do. And this is true for anything. If you are learning to code, you have to code. If you're learning to write, you have to write. If you're learning about AI, you have to actually go use the AI. So I would encourage you to dedicate 15 to 30 minutes every day, if you can, to actually experimenting with tools. And it does not have to be these advanced things. It can be as simple as, I need to write an email today, I'm going to use ChatGPT to help me write that email. I'm going to need to come up with ideas for an upcoming blog post, I'm going to use ChatGPT to help me come up with the outlines and the ideas. I'm going to write a speech, I'm going to build a presentation, I'm going to outline a course. 
Whatever it is, find ways to apply it to your existing work and you will be amazed at how quickly you learn the capabilities of this stuff. So those are our main tips for how to learn and stay up to date on AI. Again, we're going to have an article with all the compiled resources there, so you don't need to try and write everything down. Just go to the Marketing AI Institute site, and we'll have that link there for you. So that is it for episode 68. Thank you again for listening, and we will be back next week with another episode of the Marketing AI Show.
