
NASA's Artemis II Live Views from Orion

NASA

28m 4s · 4,261 words · ~22 min read
Auto-Generated

[0:00] Hello, everyone. This is a very special episode of the AWS show, because we are here at re:Invent, and we are talking all about launch announcements. How exciting is that? Joining me to do exactly that is my good friend Simon. As ever, Simon, welcome. Great to be here, and it is pretty exciting. There are so many announcements, but thankfully, we have split it up for you, so we're going to dive deep into a particular area. We are going to focus on data and AI, which, you know, everyone's talking about. It's really hot right now. It is indeed. And there's lots of great stuff around data and AI for customers, for developers, for everybody, really.

Before we get into it, you may notice that we are in a slightly different location, which is just because we are here at re:Invent. You may also notice that I'm wearing a rather fetching jumper today, and that is for a very good reason: this jumper, and a whole load of other really cool swag, is available for you to buy here at re:Invent, and the proceeds are going to a really good cause. So, make sure you check it out. But let's dive into these announcements, because that's what everyone's really here for. Yes, let's do it. What's the first thing you want to talk about?

Well, the thing that everyone is talking about in the AI space is obviously generative AI, and we've got some really cool stuff that we've announced at this re:Invent. And the first thing that really jumps out to me is what we're doing with Amazon Bedrock. Yeah, and there are a few key announcements with Bedrock: one is around additional models coming in, and the other is new features as well. Yes, absolutely. We've got loads of new models now available in Bedrock.
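To give a flavor of what using a Bedrock model looks like in code, here is a hedged sketch of the InvokeModel runtime call via boto3. The model ID and request-body schema shown here follow the legacy Claude text format and are assumptions to verify against the Bedrock docs for whichever model you choose:

```python
import json

def build_invoke_request(prompt, max_tokens=300):
    """Assemble a Bedrock InvokeModel request (legacy Claude text format).

    The model ID and body schema are assumptions; each Bedrock model
    family defines its own request body, so check the docs for yours.
    """
    return {
        "modelId": "anthropic.claude-v2",          # placeholder model choice
        "contentType": "application/json",
        "body": json.dumps({
            "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
            "max_tokens_to_sample": max_tokens,
        }),
    }

def invoke(request):
    # The actual call needs AWS credentials and model access enabled in Bedrock.
    import boto3
    resp = boto3.client("bedrock-runtime").invoke_model(**request)
    return json.loads(resp["body"].read())
```

Swapping models is just a matter of changing `modelId` and the body format, which is the "choice and flexibility" point the hosts make below.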
So, the first thing we need to talk about, because I'm so excited about this, is that Mistral AI's models are going to be available in Bedrock. How good is that? That's awesome. I mean, Mistral is really powerful: there's Mistral 7B, the small one, and then Mixtral 8x7B. Really powerful models, and it's great to have them. It really is, and it gives you so much more choice, because that's what Bedrock is all about: choice and flexibility, allowing you to choose the best model for your particular use case. And we've also got new models from Anthropic, because Claude 2.1 is now in there, plus new models from AI21 Labs, as well as our own new Amazon Titan models. Yeah, exactly, and those include a couple of different Titan models: one for embeddings and one for text generation. That is so much choice. And what about the new features, though?

Yes, so Bedrock now supports custom models, and it also supports continued pre-training. Custom models are really cool, because if you have your own private data and you want to use it to fine-tune a model, you can do that. And the cool thing is that your data is not then used to train the general public model; it's only used to train your own specific custom model. So this is huge for customers that have their own data, want to keep it private, and want to use it to fine-tune models and make them better. And continued pre-training is really useful for people that already have a model and want to keep pre-training it on new data. So it's great for both of those use cases. Exactly.
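As a rough sketch of how the two customization features above are driven programmatically, a fine-tuning or continued pre-training job can be submitted through Bedrock's control-plane CreateModelCustomizationJob API. All the names, S3 paths, role ARN, and hyperparameter values below are placeholders:

```python
def build_customization_job(job_name, custom_model_name, base_model_id,
                            training_s3_uri, output_s3_uri, role_arn,
                            customization_type="FINE_TUNING"):
    """Assemble a request for Bedrock's CreateModelCustomizationJob API.

    customization_type is "FINE_TUNING" (custom models) or
    "CONTINUED_PRE_TRAINING", matching the two features discussed above.
    All concrete values here are placeholders.
    """
    assert customization_type in ("FINE_TUNING", "CONTINUED_PRE_TRAINING")
    return {
        "jobName": job_name,
        "customModelName": custom_model_name,
        "roleArn": role_arn,                    # IAM role Bedrock assumes
        "baseModelIdentifier": base_model_id,   # foundation model to start from
        "customizationType": customization_type,
        "trainingDataConfig": {"s3Uri": training_s3_uri},  # JSONL training data
        "outputDataConfig": {"s3Uri": output_s3_uri},
        "hyperParameters": {"epochCount": "2", "batchSize": "1"},
    }

def start_job(request):
    # The actual call requires AWS credentials and Bedrock access.
    import boto3
    return boto3.client("bedrock").create_model_customization_job(**request)
```

The same request shape covers both features; only `customizationType` and the training data change.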
And the other thing that's super cool is that this works across the different models within Bedrock, so you're not limited to one particular model. You can choose the model that you want to fine-tune or continue to pre-train. Yeah, and all the same data privacy controls are in place: your data is not leaving your VPC, and it's staying secure. So, all those things that customers really want and need are there, and that's great. It is great, and it really helps build confidence in using these models, because data privacy is such a huge thing. It is indeed, and all those controls are there.

So, let's move on to the next thing, which is Amazon Q. What is Amazon Q? Amazon Q is a new generative AI powered assistant built specifically for business and for work. Yeah, this is a massive announcement. And the cool thing about Amazon Q is that it's designed to be used in a whole load of different contexts. It's not just a standalone assistant; it can be embedded in your applications and services. For example, it can be embedded in QuickSight, in Connect, and even in the AWS console. So, if you're a developer using the AWS console and you're trying to figure out how to do something, you can ask Amazon Q, and it will point you to the right documentation, give you code examples, and even help you troubleshoot issues. It's almost like having a senior developer sitting next to you who knows everything about AWS. Absolutely, it's like having your own personal AWS expert right there with you. And the other thing that's cool is that it also understands your own private data.
So, you can connect it to your own data sources, and it will use that data to answer your questions. That's huge, because it means it's not just giving you generic answers; it's giving you answers that are specific to your business and your context. Yeah, and that means you get a much better answer, because it's based on your own internal documentation and your own data. It's just really powerful and really useful. Absolutely. It's also got a lot of security features built in, so you can control who has access to what data, and what kind of information it provides. And it's integrated with your existing identity and access management systems, so it's not creating new security challenges; it's using your existing ones. So, that's Amazon Q. It's a huge announcement, and I think it's going to be a game changer for a lot of businesses.

What else have we got, Simon? So, we've also got some new things around SageMaker, our machine learning service. It's been around for a while, but we've got some new features that are really going to help data scientists and machine learning engineers. Yeah, absolutely. And the first thing we need to talk about is SageMaker HyperPod. What is SageMaker HyperPod? It's a new capability designed to help you train large language models and other foundation models faster, by providing fully managed infrastructure that's optimized for distributed training. Yeah, so it takes away all the undifferentiated heavy lifting of managing your own infrastructure for training these large models, and gives you a fully managed solution, which is awesome. So, it's really going to accelerate the training of these large models.
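The managed training cluster described above is provisioned through SageMaker's CreateCluster API. This is only a sketch: the cluster name, instance type, node count, lifecycle-script location, and role ARN are all placeholders to check against the HyperPod documentation:

```python
def build_hyperpod_cluster(cluster_name, role_arn, lifecycle_s3_uri,
                           instance_type="ml.p5.48xlarge", instance_count=4):
    """Assemble a request for SageMaker's CreateCluster API (HyperPod).

    All concrete values are placeholders. Lifecycle scripts bootstrap
    each node (e.g. installing a workload manager) when it joins.
    """
    return {
        "ClusterName": cluster_name,
        "InstanceGroups": [
            {
                "InstanceGroupName": "training-nodes",
                "InstanceType": instance_type,
                "InstanceCount": instance_count,
                "LifeCycleConfig": {
                    "SourceS3Uri": lifecycle_s3_uri,  # S3 prefix with scripts
                    "OnCreate": "on_create.sh",       # entry point on each node
                },
                "ExecutionRole": role_arn,
            }
        ],
    }

def create_cluster(request):
    # Requires AWS credentials and access to SageMaker HyperPod.
    import boto3
    return boto3.client("sagemaker").create_cluster(**request)
```

The fault-tolerance behavior the hosts describe next (automatic recovery of failed nodes) is handled by the service, not by anything in this request.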
And it's also going to help you reduce costs, because you're not paying for idle resources. Yeah, absolutely. And it's got a lot of fault-tolerance features built in, so you don't have to worry about your training jobs failing; it will automatically recover from failures, which is huge for these long-running training jobs. Yeah, it just makes it so much easier to get these models trained faster, so it's a really big deal for people working with large models. Absolutely. And it supports all the popular frameworks, so you're not locked into one; you can use PyTorch, TensorFlow, or whatever you're already using, and it will work with SageMaker HyperPod. Yeah, that's great. It's really flexible.

What else have we got on the SageMaker front? We've also got SageMaker Canvas, which is now generally available: a no-code/low-code solution for machine learning. This is for business users who want to build machine learning models but don't have coding experience. Yeah, and it's really powerful, because it lets you build models through a visual interface and connect to your existing data sources, so you can easily bring in your data and build models on top of it. Absolutely. It also has a lot of pre-built models and templates, so you can get started very quickly. And it integrates with SageMaker Studio, so if you're a data scientist and you want to take the models that business users have built and fine-tune them, or do more advanced things with them, you can do that. Yeah, it's a really good way to bridge the gap between business users and data scientists, and let them work together on machine learning projects. Absolutely. So, that's SageMaker Canvas. And then we've also got SageMaker Studio notebooks, which now support JupyterLab 4.
So, that's just a new version of JupyterLab with a lot of new features and improvements. Yeah, and it makes the notebook experience even better for data scientists, so that's a nice improvement. Absolutely. So, that's it for SageMaker.

What else have we got, Simon? We've also got some new things around data analytics, which is a huge area for AWS, with new features that are going to help customers get even more value out of their data. Yeah, absolutely. And the first thing we need to talk about is Amazon Aurora Limitless Database. What is Aurora Limitless Database? It's a new capability designed to help you scale your Aurora databases to handle massive amounts of data and traffic, by automatically sharding your data across multiple Aurora instances. Yeah, so it takes away all the undifferentiated heavy lifting of sharding your database yourself, and gives you a fully managed solution that automatically scales as your data and traffic grow. Yeah, and it's really going to help customers who have very large databases, or very spiky workloads, without a lot of manual work. Absolutely. And it's launching with the PostgreSQL-compatible edition of Aurora first. Yeah, that's great.

What else have we got on the data analytics front? Amazon Redshift Serverless now effectively scales to zero: when you're not running queries against your Redshift workgroup, you won't be paying for idle compute. Yeah, that's a huge cost saver for customers with intermittent workloads, or with development and test environments. So, it's really going to help them optimize their costs. Absolutely.
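For context on where that pay-per-use compute lives, Redshift Serverless organizes storage into a namespace and compute into a workgroup, created via the CreateWorkgroup API. A minimal, hedged sketch; the names and capacity value are placeholders:

```python
def build_workgroup(workgroup_name, namespace_name, base_capacity=32):
    """Request body for Redshift Serverless CreateWorkgroup.

    base_capacity is in Redshift Processing Units (RPUs); compute scales
    automatically from there, and you are billed only while queries run.
    Names here are placeholders.
    """
    return {
        "workgroupName": workgroup_name,
        "namespaceName": namespace_name,  # namespace must already exist
        "baseCapacity": base_capacity,
    }

def create_workgroup(request):
    # Requires AWS credentials and Redshift Serverless access.
    import boto3
    return boto3.client("redshift-serverless").create_workgroup(**request)
```

There is no "scale to zero" flag to set: idling down when no queries are running is the service's billing model, not a configuration option.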
And it's also got a lot of other improvements: faster query performance and better concurrency, so you can run more queries at the same time. Yeah, so it's making Redshift Serverless even more powerful and more cost-effective. That's a great improvement. Absolutely.

And then we've also got the vector engine for Amazon OpenSearch Serverless. What is the vector engine? It's a new capability designed to let you store and search vector embeddings in OpenSearch Serverless, and this is really important for generative AI, because vector embeddings are used to represent the meaning of text and other data. Yeah, absolutely. This is huge for customers building generative AI applications, because it lets them store and search those embeddings very efficiently. And it integrates with Bedrock, so you can generate your vector embeddings using Bedrock, store them in OpenSearch Serverless, and then search over them. Yeah, and it makes it so much easier to build these generative AI applications, because you don't have to worry about managing your own vector database; it's all fully managed for you. Absolutely.

So, that's OpenSearch Serverless with the vector engine. And then we've also got some new things around AWS Glue: the AWS Glue Data Catalog now supports Apache Iceberg. What is Apache Iceberg? It's an open table format designed to help you manage large analytical datasets in a data lake, with features like schema evolution, time travel, and hidden partitioning. Yeah, so this is huge for customers building data lakes, because it lets them manage their data much more efficiently and reliably. And it integrates with a lot of other AWS services, so you can use it with Athena, Redshift, EMR, and a whole load of other services.
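As a concrete taste of the Iceberg support described above, here is a hedged sketch of creating an Iceberg table in the Glue Data Catalog through Athena DDL. The database, table, schema, and S3 location are placeholders; the `table_type = 'ICEBERG'` property and partition-transform syntax should be double-checked against the Athena docs:

```python
import textwrap

def iceberg_ddl(database, table, location):
    """DDL for an Iceberg table registered in the Glue Data Catalog.

    The schema, partition transform, and S3 location are placeholders.
    day(event_time) is a hidden-partitioning transform: queries filter on
    event_time without knowing how the data is physically partitioned.
    """
    return textwrap.dedent(f"""
        CREATE TABLE {database}.{table} (
            event_id   string,
            event_time timestamp,
            payload    string)
        PARTITIONED BY (day(event_time))
        LOCATION '{location}'
        TBLPROPERTIES ('table_type' = 'ICEBERG')
    """).strip()

def run_ddl(ddl, output_s3):
    # Requires AWS credentials; Athena writes query results to output_s3.
    import boto3
    return boto3.client("athena").start_query_execution(
        QueryString=ddl,
        ResultConfiguration={"OutputLocation": output_s3},
    )
```

Once created this way, the same catalog table is visible to the other engines the hosts mention, such as EMR and Redshift Spectrum.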
Yeah, so it's just making it easier to build and manage data lakes with AWS Glue. That's a great improvement. Absolutely. So, that's it for data analytics.

What else have we got, Simon? We've also got some new things around containers and serverless, two very popular areas for AWS, with new features that are going to help customers build and run their applications even more efficiently. Yeah, absolutely. And the first thing we need to talk about is Amazon EKS Pod Identity. What is EKS Pod Identity? It's a new capability designed to help you manage access to AWS resources from your Kubernetes pods, by letting you associate IAM roles with your Kubernetes service accounts. Yeah, so this is huge for customers running Kubernetes on EKS, because it lets them manage access to AWS resources much more securely and granularly. And it simplifies configuring IAM roles for your pods, so you don't have to do a lot of manual work. Yeah, it just makes it easier to run secure and compliant applications on EKS. That's a great improvement. Absolutely. And it works with AWS IAM, so you can use your existing IAM policies and roles. Yeah, that's great. It's really flexible.

What else have we got on the containers and serverless front? AWS Lambda now supports response streaming. What is response streaming? It's a capability that lets you build functions that stream responses back to clients incrementally, which is really important for applications that need to return large amounts of data, or return data in real time. Yeah, absolutely. So, this is huge for customers building real-time applications, or applications that need to return large payloads.
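On the consuming side, a streamed Lambda response arrives as a sequence of payload-chunk events via the InvokeWithResponseStream API. A hedged sketch; the function name is a placeholder, and the exact event-dict shape should be verified against the Lambda API reference:

```python
def reassemble(events):
    """Collect streamed payload chunks into the full response body.

    Each event is a dict; chunks carry a "PayloadChunk" key, and a final
    "InvokeComplete" event signals the end of the stream.
    """
    chunks = []
    for event in events:
        if "PayloadChunk" in event:
            chunks.append(event["PayloadChunk"]["Payload"])  # raw bytes
    return b"".join(chunks)

def stream_invoke(function_name, payload_json):
    # Requires AWS credentials and a function configured for streaming.
    import boto3
    resp = boto3.client("lambda").invoke_with_response_stream(
        FunctionName=function_name, Payload=payload_json)
    return reassemble(resp["EventStream"])
```

The point the hosts make next, that clients see data as it is produced, comes from iterating the event stream as chunks arrive rather than buffering the whole body.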
And it also helps improve the user experience, because users don't have to wait for the entire response to be generated before they start seeing data; they see it incrementally as it's being generated. Yeah, and it really helps with the perceived performance of these applications, because the time to first byte drops dramatically. Absolutely. And it's supported natively in the Node.js managed runtimes, while other languages can use it through custom runtimes, so there's a path for whatever you're already using. Yeah, that's great.

What else have we got on the containers and serverless front? AWS Step Functions now has direct SDK integrations with over 200 AWS services. What does that mean? It means you can call those services from Step Functions directly, without having to write any custom code. And this is really important for customers building complex workflows, because it lets them orchestrate across a lot of different AWS services very easily. Yeah, absolutely. So, this is huge for customers building event-driven architectures, or building microservices. And it simplifies the process of building these workflows, because you don't have to write glue code to integrate with those services. Yeah, it just makes it so much easier to build complex workflows with Step Functions. That's a great improvement. Absolutely. And on top of that you get Step Functions' error handling, retries, and parallel execution, so it's really powerful and robust. Yeah, it's just making it easier to build resilient and scalable workflows. That's a great improvement. Absolutely. So, that's it for containers and serverless. What else have we got, Simon?
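Before moving on, the direct-SDK-integration pattern just described can be sketched as an Amazon States Language definition. The state calls S3's ListObjectsV2 straight from the workflow, with no Lambda glue code; the bucket name, state machine name, and role ARN are placeholders:

```python
import json

def build_state_machine_definition(prefix):
    """A minimal ASL definition using an aws-sdk service integration.

    "aws-sdk" task resources expose SDK operations directly; the bucket
    name below is a placeholder.
    """
    return {
        "StartAt": "ListObjects",
        "States": {
            "ListObjects": {
                "Type": "Task",
                # Direct SDK integration: no custom code in between.
                "Resource": "arn:aws:states:::aws-sdk:s3:listObjectsV2",
                "Parameters": {"Bucket": "example-bucket", "Prefix": prefix},
                "End": True,
            }
        },
    }

def create_state_machine(name, definition, role_arn):
    # Requires AWS credentials and an execution role Step Functions can assume.
    import boto3
    return boto3.client("stepfunctions").create_state_machine(
        name=name, definition=json.dumps(definition), roleArn=role_arn)
```

Retries and error handling would be added per-state with `Retry` and `Catch` blocks in the same definition.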
So, we've also got some new things around networking and content delivery, which are very important areas for AWS, with new features that are going to help customers build and run their applications even more efficiently. Yeah, absolutely. And the first thing we need to talk about is that Amazon VPC Lattice now supports multiple accounts. What does that mean? It means you can use VPC Lattice to connect services across multiple AWS accounts, and even across organizations. And this is really important for customers with complex architectures, or with microservices spread across multiple accounts. Yeah, absolutely. So, this is huge for customers building multi-account architectures, and it simplifies connecting those services, because you don't have to manage a lot of complex networking configuration yourself. Yeah, it just makes it so much easier to build and manage complex applications across multiple accounts with VPC Lattice. That's a great improvement. Absolutely. And it gives you traffic routing, load balancing, and health checks, so it's really powerful and robust for building resilient, scalable applications. That's a great improvement. So, that's it for networking and content delivery.

What else have we got, Simon? We've also got some new things around security and identity, two very critical areas, with new features that are going to help customers secure their applications and data even more effectively. Yeah, absolutely. And the first thing we need to talk about is that Amazon Inspector now supports code scanning. What is code scanning?
Code scanning is a new capability designed to help you identify security vulnerabilities in your application code, by automatically scanning for common vulnerabilities like SQL injection and cross-site scripting. Yeah, so this is huge for building secure applications, because it lets you find and fix vulnerabilities very early in the development process. And it integrates with your existing CI/CD pipelines, so you can automate the scanning. Yeah, it just makes it so much easier to build secure applications with Inspector. That's a great improvement. Absolutely. And it supports a lot of different languages: Java, Python, Node.js, and a whole load of others. Yeah, that's great. It's really flexible.

What else have we got on the security and identity front? AWS KMS now supports external key stores. What does that mean? It means you can use your own external key management system with AWS KMS to manage your encryption keys. And this is really important for customers with very strict compliance requirements, or very specific key management needs. Yeah, absolutely. So, this is huge for customers who need full control over their key material, because it integrates with your existing key management systems; you don't have to migrate your keys into KMS. You can keep using your existing ones. Yeah, and it just makes it so much easier to meet those requirements with KMS. Absolutely. And you still get KMS features like key policies and access control on top, so it's really powerful and robust. Yeah, it's just making it easier to manage your encryption keys securely.
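Wiring an external key store into KMS goes through the CreateCustomKeyStore API, pointed at an XKS proxy that fronts your own key manager. This is only a sketch: the endpoint, path, and credentials are placeholders, and the exact parameter names should be verified against the KMS API reference:

```python
def build_external_key_store(store_name, proxy_endpoint, proxy_path,
                             access_key_id, raw_secret):
    """Request for KMS CreateCustomKeyStore with an external key store.

    The XKS proxy values are placeholders for your own proxy deployment,
    which sits in front of your external key manager.
    """
    return {
        "CustomKeyStoreName": store_name,
        "CustomKeyStoreType": "EXTERNAL_KEY_STORE",
        "XksProxyUriEndpoint": proxy_endpoint,  # e.g. https://xks.example.com
        "XksProxyUriPath": proxy_path,          # e.g. /example/kms/xks/v1
        "XksProxyAuthenticationCredential": {
            "AccessKeyId": access_key_id,
            "RawSecretAccessKey": raw_secret,
        },
        "XksProxyConnectivity": "PUBLIC_ENDPOINT",  # or VPC endpoint service
    }

def create_key_store(request):
    # Requires AWS credentials; the key store must then be connected
    # before KMS keys can be created in it.
    import boto3
    return boto3.client("kms").create_custom_key_store(**request)
```

KMS never sees the external key material itself; cryptographic operations for keys in this store are forwarded through the proxy to your key manager.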
So, that's a great improvement. Absolutely. So, that's it for security and identity. What else have we got, Simon? We've also got some new things around management and governance, which are very important areas for AWS, with new features that are going to help customers manage and govern their AWS resources even more effectively. Yeah, absolutely. And the first thing we need to talk about is aggregators in AWS Config. What are aggregators? They're a capability designed to help you aggregate configuration data from multiple AWS accounts and regions. And this is really important for customers with complex architectures, or with resources spread across many accounts and regions. Yeah, absolutely. So, this is huge for customers who need a centralized view of their AWS resource configuration. And it simplifies auditing and compliance, because you can easily see all your resource configurations in one place. Yeah, it just makes it so much easier to manage and govern your AWS resources with Config. Absolutely. And Config also gives you compliance rules, remediation actions, and reporting, so it's really powerful and robust. Yeah, it's just making it easier to maintain compliance and security. That's a great improvement. Absolutely. So, that's it for management and governance.

What else have we got, Simon? We've also got some new things around developer tools, with new features that are going to help developers build and deploy their applications even more efficiently. Yeah, absolutely. And the first thing we need to talk about is that AWS CodeCatalyst is now generally available. What is CodeCatalyst?
CodeCatalyst is a unified software development service designed to help you build, deploy, and operate applications on AWS, with a single console for all your development activities, including source code management, CI/CD, and issue tracking. Yeah, so this is huge for developers, because it lets them manage their entire software development lifecycle in one place. And it also integrates with a lot of other AWS services, so you can use it with CodeCommit, CodeBuild, CodeDeploy, and a whole load of other services. Yeah, so it's just making it easier to build and deploy applications on AWS. That's a great improvement. Absolutely. And it supports project templates, workflows, and notifications, so it's really powerful, and it's making it easier to build and deliver applications faster. That's a great improvement. Absolutely. So, that's it for developer tools.

What else have we got, Simon? I think that's it. That's a lot of announcements; we've covered a lot of ground today, and I think we've covered the most important ones for data and AI. Absolutely. And, you know, we've only just scratched the surface; there are so many other announcements that we haven't even touched on. So, make sure you check out the AWS News Blog, and also the re:Invent website, for all the other announcements. But thank you so much for joining me, Simon. It's been a pleasure as always, and it's been a lot of fun. So, thank you. Absolutely. And thank you for watching. We'll see you next time.
