Decode AI

AI Ethics, Video Disruption & Hacking Bots – Google I/O 2025, Claude 4 & More

Michael & Ralf Season 1 Episode 15

Send us a text

In this episode of Decode AI, hosts Michael Plettner and Ralf Richter explore the latest advancements in artificial intelligence, including Claude 4's safety features, Google's Veo 3 for video production, the introduction of local AI models with AI Edge, and the reasoning capabilities of Gemini 2.5. They also discuss Project Astra's real-time assistance, the implications of AI in cybersecurity, and the legal challenges posed by AI-generated content. The conversation highlights the rapid evolution of AI technology and its impact on various industries.


Takeaways

Claude 4 introduces advanced guardrails for AI safety.
Google's Veo 3 democratizes video production, making it accessible to all.
AI Edge allows users to run models locally on devices.
Gemini 2.5 enhances reasoning capabilities for complex tasks.
Project Astra integrates real-time AI assistance into daily life.
AI agents are outperforming elite human hackers in cybersecurity.
Legal implications arise from AI-generated court cases.
Ethics in AI remains a complex and evolving challenge.
AI is transforming traditional search into an interactive experience.
The rapid evolution of AI technology is reshaping industries.

Keywords

AI, Microsoft Build, OpenAI, language models, AI development tools, hardware advancements, Google Gemini, technology development


Hello and welcome to Decode AI, your podcast diving into the fascinating, complex, and ever-evolving world of artificial intelligence. Each episode, Michael and I bring you insights from both the consulting and the technical frontiers. Whether you are here to sharpen your business strategies or deepen your technical understanding, we've got you covered. Grab your coffee, sit back, and let's decode AI. Hi, I'm Michael. I'm, technically, a digital transformation... I don't know, you can put in some buzzword bingo here. I'm not the technical guy, that's the point. I'm more on the business side here. And I can help you understand some ideas, figure out how to put them into your business, and get AI working for you: bring it into your business and work smarter, not harder. And Ralf? Yeah, I'm Ralf, your resident tech wizard. From coding to GitHub Copilot and deep AI technology, I'll ensure we're grounded in reality. Well, the digital one, anyway. So let's start with our first topic for today. To give you a glimpse into the whole episode: we will talk about a lot of Google, especially the Google I/O conference we had a couple of days, weeks, or years ago, depending on when you listen to this podcast. But we would like to talk about our latest highlights. So we would like to start with Claude 4. And it feels like this is more about AI safety instead of just "we have a new model" or something like that. Indeed, Claude 4 introduces advanced guardrails against AI misuse. So think of it as AI's own ethics manual, encoded directly into the system, essentially making AI safer by design. That's the clue to what Claude 4, Anthropic's leap in AI safety, is about. That's interesting, because we had some previous episodes talking about security issues, also with MCP servers. So it's interesting to see that companies are getting a focus on security and not only developing faster, cheaper, better models.
So this is something which will have a really huge impact on businesses. If you have to decide, I wouldn't say between insecure options, but if you have to rely on security, you may choose something which is more secure. And on the technical part, well, it is not so much about technique. It is looking at ethics: the model is challenging itself on whether an approach is ethical, whether it is commonly accepted or not. So it holds itself to ethical standards, and it challenges itself through reinforcement learning. And that brings a lot of safety when it comes to ethical questions in using AI, which is a huge leap forward in bringing it to a safer place for everybody. And I mean, that's an important thing here. Yeah, well, don't get me wrong, I totally agree: from my point of view, every system has to follow specific ethical standards. The "well" at the beginning of my sentence is because we actually have a lot of different ethical standards if you look at different regions of the world, for example. So there are some differences in culture and in requirements. And that's why you actually have to decide: is this what you're looking for? It means broader business adoption, I would say, if you are in the exact same area, and it will help to shape the market. And it's interesting, from my point of view, to see this focus on specific areas to improve the way models are treated and what comes out, and what sounds unethical to people. It's like AI gets its own moral compass to navigate through the whole ethical jungle. And I think it is geared more towards the Western Hemisphere, less towards the Eastern Hemisphere, where maybe a different understanding of ethics is common, not on all points, but on some. Yes, you can see the changes in Claude are going to help it run in a Western context, more European related, more Western related. So that's true for sure.
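The "challenging itself on whether an approach is ethical" idea can be sketched as a small critique-and-revise loop. This is only a toy illustration in the spirit of that approach, not Anthropic's actual system: the rule list, function names, and keyword matching are all invented for the example (a real system would use a model, not string matching).

```python
CONSTITUTION = [
    # (trigger phrase, principle to apply) -- invented rules, purely illustrative
    ("medical advice", "recommend consulting a professional"),
    ("password", "never reveal credentials"),
]

def critique(draft):
    """Return the (trigger, principle) pairs the draft violates (keyword match only)."""
    return [(t, p) for t, p in CONSTITUTION if t in draft.lower()]

def revise(draft, violations):
    """Stand-in for the model rewriting its own draft to satisfy the critique."""
    draft = draft.lower()
    for trigger, principle in violations:
        draft = draft.replace(trigger, f"[{principle}]")
    return draft

def respond(draft, max_rounds=3):
    """Generate, self-critique, and revise until no principle is violated."""
    for _ in range(max_rounds):
        violations = critique(draft)
        if not violations:
            break
        draft = revise(draft, violations)
    return draft

print(respond("Here is some medical advice about your rash."))
# -> here is some [recommend consulting a professional] about your rash.
```

The point of the loop shape: the model's own output is fed back through its principles before it ever reaches the user, which is what "safer by design" means here.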
And there are differences in all models depending on where they were made. And it is assured that they differ from each other, country by country. And then we have another cool topic, and you already announced it in the introduction: we're talking a lot about Google. Google's stuff is really cool. And some of you may have noticed on social media already that there are tons of AI-generated videos available now. And it's making a big wave that Google's Veo 3 is there and is allowing users to generate a ton of AI videos in just a few days. And Google itself frames it as a democratization of AI-powered media production. So it's pretty tremendous what's happening out there with Veo 3. Well, I would actually call it a kind of revolutionizing thing, because this is not only democratizing and liberating, or however we can put it in positive phrases, for media production. It's also something which is incredible. If you see the first videos generated by this AI: not only was it plain video, but also with voice, with music. So you get the full video and audio experience. And you get a video which is basically no different from any professionally directed video in quality, in how people look. It's not the typical issues like six or eight fingers, or additional arms and legs when there's motion. It's something which is really, really powerful. And if you take a look at the business part, we have companies who actually create those videos to earn money, and now you have a... how can I say this? A pretty cheap helper in this area. If you take a look at the license prices, it's, I think, for the larger plan, I'm not sure, 250 or 299 dollars per user per month. It's not cheap, but compare that to a professional who creates videos like this, with artists, with lights, with audio, with all the stuff you need to create a brilliant video. Whew!
That will definitely change and disrupt the media circus. Yeah, it feels like Veo 3 is the enabler bringing disruption into the media world, because AI-generated media is more and more becoming mainstream. And technically speaking, it is a diffusion model which is heavily optimized for video production and allows you to rapidly generate coherent and contextually relevant videos. It's like having an entire film studio at your fingertips. So that's pretty cool, because it means it is pretty easy to use and brings technically good results with high quality at the end of the day. And that's the thing here. Absolutely. Yeah, as we always say, there's a huge opportunity in this, especially for companies who work with the new technology. So they can add specific scenes, specific parts, into real-world production films. But it's definitely something which is disruptive, ground-shaking I would say, for video creation. I think we still have some limitations, right? We still get a better movie or better scenery if a human can interact and maybe improvise in some specific scenarios. So it's definitely something different. But, and you know you can ignore everything before the "but" if there is a "but" in the sentence, you get some impressive results from AI. And as you said, it's right at your fingertips, it's just there, it takes some minutes. In the near future it will definitely change the way we think about videos. Don't think about social media; that's actually scary to think about in this scenario. Anyway, businesses have to adapt and use it for good and to improve a lot in the business. Yeah, true story.
Well, and then there's something which is maybe, I don't know, not a secret anymore, but it was released pretty quietly by Google: a local AI model app, actually. We had a discussion about small language models already, and it definitely feels like Google did something which is a good competitor to Apple Intelligence. You noticed that, right? That Apple keeps stepping in the same place every time, so they're not moving forward. They're really far behind. I love Apple for the whole ecosystem and integration, but on the AI side it's absolutely... and it's a shame they call it Apple Intelligence, to associate it with real AI. Oh my gosh. I think that hurts a little bit. Shall we pause the recording? I'm sorry. So this is the AI Edge Gallery app. And it enables users to download and run AI models directly on their Android device. For instance, yes, without needing an internet connection. Mm-hmm, impressive. Yeah. So it was a bit like a sneak peek; Google didn't announce it broadly and loudly. But like Michael said, you can now run an AI model in your pocket. And that's tremendous. So Google released the so-called AI Edge Gallery. There you can download a model which is LiteRT-enabled, or capable of running on the LiteRT runtime, if your device supports it. So there are some limitations; we don't want to say that you can run everything on your phone now. But when that's said and done, you can literally try out most models which are capable of running on the LiteRT runtime, right in your pocket. And that's pretty cool. Technically, it really means you run an AI model efficiently on the user's device, and you can import whatever you want there. It has, as I said, some limitations, but it also provides you with high flexibility regarding models and things you can do in your custom applications.
So really, I'm looking forward to getting my hands on it. I didn't have the time yet to try it out, but it sounds promising to me. And the fantastic story behind this, and this will make Michael very happy: there is the promise that an iOS app version is expected to be released soon. So you finally get real AI on your iPhone. Yeah, and Apple will not allow it to replace the existing AI technology they implement. Anyway, different story. I like the idea of having AI for specific needs, especially in some areas. We actually live in Germany, where we don't have good Wi-Fi or internet connection all the time. So if you have a limited connection, or you just want to reduce the footprint and avoid online connectivity... that's something which sounds pretty interesting to me, just to have it right in my pocket. I don't have to care about where I am right now: in an elevator, in the underground, or just traveling through Germany. There are still different scenarios, and it will still work. That's fantastic. Yeah, it also touches other areas, like emphasizing privacy, not only the offline capability. So it matters a lot for privacy too, and it will definitely change the way applications are developed. I hope this will come to my iPhone pretty soon. Otherwise we can recommend you an Android device. I'm running both, and I still like Android more. Yeah, but you know, I live in the whole ecosystem and that's so, so much better. You live on a Windows device, right? I don't, for good reasons. Some parts are good. The quality of the OS... okay, I'm fine with that. So, Google. There was more at Google I/O 2025, and we should have a look at the other five top AI announcements and the future impact of Google I/O 2025, the landmark event when it comes to Google technology. I think we already talked about some of this. Let me stick to the order.
But I would like to have the fifth topic first. The fifth? No, you can stick to the order. So the first part is reasoning. Reasoning is coming to Gemini, with Gemini 5... sorry, Gemini 2.5, not 5.2. With Gemini 2.5 Pro Deep Think, you get reasoning in Gemini. And I've heard so many good things about Gemini over the last months, unfortunately. And I'm afraid of seeing Gemini coming to Docs and Gmail, connecting some dots like Microsoft did with Copilot. Which they didn't do well. Oh, I'm sorry. Yeah, yeah, yeah. Well, it's not as bad as what Apple is doing with Apple Intelligence; there is still someone on the market doing it worse than them. Okay. Great. So Gemini 2.5 Pro introduces Deep Think and enables the AI to perform complex multi-step reasoning tasks, which is a thing that's already available in ChatGPT and in GitHub Copilot. And now Google's Gemini is also capable of doing so, but you need to pay for it; on the free version, you won't get it. So you have to pay, what is it, 24 bucks a month for Gemini Pro? Sorry, I didn't get it yet. Sorry, it is 22 bucks in Europe to get the feature. But it is a tool made for researchers and developers in the mainstream, and it will give them the opportunity to run reasoning tasks, executed over multiple steps, to assist them in their research. And as Michael said, across Google's platforms it'll connect the dots, making users able to work across their docs, emails, appointments, whatsoever. That's pretty cool. Yeah, and another cool story is that AI is coming to Google Search, which is going to be personalized now and very interactive. Many of you may have experienced already that Gemini is sitting in Google Search, and it's going to move more and more in that direction.
And that means the search itself will transform into a more interactive platform, providing personalized responses based on the context of the user: your location, whatever you did before, your behavior. I don't know yet if I will like it or not. I mean, you know, it was sometimes a struggle when you had a search history and needed to clean it up to get valid results back, because someone misused your Google account or was searching from the same IP address you are using. I don't know yet what that will mean for us, but it will be multimodal. That means you can use text, you can use voice, you can use images, and it will make searches more intuitive. Well, I'm not sure if it really makes search intuitive. You know, two years ago, when we started to explain AI to regular people, the first difference to explain was: it's not a search engine. Please don't drop something like "supermarket opening hours next to me" into it like a search query. Nowadays... two years ago it was a little bit hard, right? That's true. But let me explain what I'm trying to say here. We have used search engines for more than 20 years now, maybe more. And now Google is moving this into the next era. And we will get something which is much more interactive. And that means it's maybe not so familiar and needs some time until it becomes intuitive. It sounds good. I really like getting a summary of the first steps I need to take for certain things I was looking for. So I get a short description of the next steps, for example, well, I don't know, to create a Docker container or something like that. So I don't have to open a website anymore. I still get the references, I see what's going on, where the ideas are coming from. There is still some real content behind it. But I don't know how it works out for the person who wants to publish blog articles, news, or whatever.
And Google is crawling everything together, bringing it into the search engine and giving you quick answers instead of results. Before, you could verify: I know this person, I've read another article by this person, I trust them. I've seen a source I like because I know they write in the style I prefer, or something like that. And now we get something which is already preselected, not something I can choose from. It doesn't feel like... from my personal point of view, maybe I'm at the age where I become the old man yelling out of the window, "don't change anything!" I don't know if it's that or... I don't know. It sounds good to get some AI-driven personal information, that sounds good in the first place. And then I get shivers about them knowing everything about me. Yes, but now someone can give me some more context, right? So. It's scary, but they did it already. There was already that analysis of your account profile, so you don't need to be scared about that. But coming back to the start of your point: you were saying that it is not intuitive, and I'm coming from the perspective of ease of use, and I would say, yes, it is going to be more intuitive. Because before, you had to learn how search terms work. Like, how do I include a whole phrase instead of just a word on Google? What keywords do I have to use so that I will find the right thing? And now it's going to be an assistant you can talk to in your natural language, and you don't have to think about any operators and such; it will understand your search intent a little bit more, optimize it, and then maybe find the better result in an easier way. The problem I have there is the distrust about whether the results are okay. My experience is that sometimes I'm reading it and then I say: I'm not 100% convinced, I will have a look at it myself. That's more my issue.
And as well, the other thing: I don't want my search history involved in that. I want to have control over how the search engine behaves. Instead, the chance is there that it's going to advertise to me something which it knows is on my agenda. Oh, you said something there. I've heard a story where someone was wondering why they get advertising about topics they never searched for. And they figured out it was because someone else was using the same Wi-Fi and had searched for those topics. Then just imagine someone is using it to look for a birthday gift for you, and you are looking up something, maybe something related. And then it comes up with: hey, remember yesterday evening at 2 a.m. you were looking into something like this. Do you want to continue there? Let's see what the future brings us. It sounds a little bit scary. I like the idea behind it. You don't have to be the person using just the specific words you are looking for, trying to figure out: do I need a plus, do I need a minus to specify the query? It was not a question, it was a query. We'll see how good it gets with the intuitive way of working with search engines. What I really like is the development of Google Search. It felt like there were only minor changes over the last five, six, seven, eight years. And now we get real AI in front of everyone. So, from the "let's see how society will react to this" perspective: really interesting. I mean, I haven't brought up the topic in this episode yet, but we need to talk about AI browsers. Browsing the internet will change a lot. I've read a couple of things this past week, and I'm really going to try it out and look into the deeper details, but you won't like it that much. Yeah, it sounds scary already. And I'm happy to try something out, but please give me the option, the alternative, to turn it off.
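The "plus, minus, phrase in quotes" style of query the hosts describe can be made concrete with a tiny parser and matcher. This is a toy sketch of classic keyword-operator search, not Google's actual query syntax or ranking; the function names and the exact operator semantics are our own simplification.

```python
import re

def parse_query(q):
    """Split a classic keyword query into phrase, required (+), excluded (-), and plain terms."""
    phrases = re.findall(r'"([^"]+)"', q)          # quoted phrases must appear verbatim
    terms = re.sub(r'"[^"]+"', " ", q).split()     # remaining bare operator terms
    return {
        "phrases": phrases,
        "required": [t[1:] for t in terms if t.startswith("+")],
        "excluded": [t[1:] for t in terms if t.startswith("-")],
        "plain": [t for t in terms if t[0] not in "+-"],
    }

def matches(doc, q):
    """Return True if the document satisfies every operator in the query."""
    p, d = parse_query(q), doc.lower()
    must = p["phrases"] + p["required"] + p["plain"]
    return (all(t.lower() in d for t in must)
            and not any(t.lower() in d for t in p["excluded"]))

print(matches("How to create a Docker container: a tutorial",
              '"docker container" +tutorial -podcast'))  # -> True
```

That rigid syntax is exactly what a conversational search assistant removes: the user states intent in natural language, and the system derives the constraints itself.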
Yeah, they claim it's not a human browser anymore, it's an AI agent browser. So stay tuned. The third topic we picked out of Google I/O 2025 was Project Astra and Gemini Live. Something is happening with real-time multimodal AI assistance. So Project Astra is going to evolve into Gemini Live and will offer real-time AI assistance by integrating camera, voice, and web data, which you may have seen already in ads on YouTube, TV, wherever. Where the guy is messing up the sauce and it still turns into a cookie, at least with the assistance of Gemini. That's going live now. It's a multimodal approach which enables the AI to understand and respond to complex real-world scenarios. The guy is just talking to Gemini: hey, I messed up the sauce, what can I do with it? And Gemini answers: do you have this? Then add some flour and mix it up. And then you have some nice cookies at the end of the day. Something like that. Or a girl is at the, is that called a washing saloon? A laundromat, for clothing. She sees a man pulling out a pullover that has shrunk to a tiny size, and she holds up her skirt and asks Gemini, holding the camera up to the washing label: how may I wash this and how may I dry this? And Gemini answers how to do that, and so on and so forth. That's what Gemini Live with real-time AI assistance is meant to be. And I mean, yeah, we will have a point five there. So it's going to be really... okay, I cannot hold it back anymore. I want to talk about point five because it's closely connected here. Sorry about that. Do you remember, I think in one of our earliest episodes we talked about Google I/O last year. And there was a scenario where someone had Google glasses or another product; I'm not sure if it really was Google glasses. Mm-hmm.
There was a device, glasses, with a camera and a voice-interactive assistant. And this person was walking through an office or something like that. And at the end, the person asked: I forgot my keys. Where have I lost my keys? Or where did I last see my keys? Wasn't that the Ray-Ban smart glasses? Yeah, maybe. I'm not sure about the glasses themselves, but it was definitely in a Google presentation as well. And then this person got the answer: yes, you saw them on the table next to the plant, or whatever it was. That was mind-blowing. And I think now it's available. This is coming to us. We have it in our daily life, and it's not only connected via glasses, and that's maybe the development: you can use Gemini Live on mobile devices, to my understanding. And it's also integrated into your regular life. You just use the product as you usually would, and then you can interact with it. Your examples, and I think also the keys. But yeah, that's crazy. Yeah, so Android XR smart glasses are a game changer. Now you spoiled the name of topic five. Yes, that's true. No, that wasn't me, that was you. And I'm now going to give them the name so they can look it up. It is Android XR smart glasses, and it's a development with Samsung as well as Warby Parker. And they are aiming to seamlessly integrate AI into the user's field of vision, and they want to provide real-time information. The idea is not that new, but combined with LLMs and AI overall, it's now becoming a game changer, as the models have gotten that good in quality and can handle much more complex situations than before. And that's really a tremendous game changer here. I'm really looking forward to what will come there. I'm still avoiding wearing those glasses or buying one of them. The one thing is, I mean, would you be worried now if someone stood in front of you wearing such glasses?
What will be the indicator that you're now being filmed? I mean, will there be one? Does that become a common and usual situation, or how do we handle it? I'm also not sure what people who cannot afford something like that will do. I don't have a good feeling about this at the moment. Yeah, and I think that's definitely a point we have to consider for this very specific topic. One side is an always-analyzing AI helping us in our daily life, which is impressive, I would say, from a technical perspective. But the other side is the technology being put into glasses. And then you raised some critical questions; I totally agree. But I also see some opportunities, right? Where there is shadow, there is always light, or something like that. Mm-hmm. Anyway. Think about the old use cases we have seen a lot from Microsoft and other companies who produce VR or AR headsets. Now, Apple is different; the focus group for Apple is... They're stopping the program, aren't they? Consumers. That's what I want to say. There are rumors they will stop that. On the other hand, there are rumors they will develop glasses as well. And then with AI. Oh my gosh. Anyway. You mean Apple Intelligence there. Unfortunately, I mean Apple Intelligence in this case. But the story behind that is: we have seen, from Microsoft, AR headsets actually, not glasses, available for quite a while. It's HoloLens. The point is, the name is HoloLens. And the idea behind it is that in a manufacturing area, for example, you can see the blueprints in the room, work with the blueprints, or get data analyzed in a very quick and easy way compared to manuals. But you still have this huge headset, right? And now Google is putting this into glasses, and you give someone in, I don't know, logistics, manufacturing, healthcare, something intelligent right before their eyes. That's interesting.
That's really interesting. I totally agree about all the personal and private areas we have on a daily basis, like walking, going outside, shopping for groceries. This is something, well, I don't like that either. But from a business perspective, that's good. I like that. Yeah, I'm still having issues with it. I don't know the name of the film with Tom Cruise where they predict the next suicide or murder. Um, Minority Report. And it was so scary how they identified who was passing by, and the walls then showed targeted advertising. No, I don't want to think about that. Let's go to the next topic. We already had Astra becoming Gemini Live and real-life assistance. So the next topic is Veo 3 and Flow. Veo 3, you remember, we had it a couple of minutes ago: it produces video for you, and now also audio, and allows you to create videos with talking people where the dialogue matches the moving faces, including sound effects and so on. Really cool. I don't know if we need to go deeper into that. You already talked about the democratization and that it will be a disruptor in the media market, so I'm completely with you there. I just want to add the Flow part. It's a... I call it a product; I'm not sure if it's really a product. It's something Google named, definitely, so I think it's a product. Anyway, Flow is something for easy editing and working with the material you created with Veo 3. So it's something which can help you improve the already great video content you get from Veo 3. Yes, absolutely. Are you open to spending 200 or 300 dollars to get your hands on Veo and Flow? Um, if I got it right, you can have Veo with the 20 or 22 bucks plan. Really? Sorry, please be patient, I have to click something. You can click on "Try Flow" and then "Create with Flow". And then you get the smart message that Flow is currently not available in your region.
Okay, then I just have to change my... It's not a problem. It's not like Apple Intelligence, which blocks everything because they have a lot of data about where I actually am. It's more about where my browser is. So it's just Google. Okay, so when we look at Google I/O: well, impressive, I would say. From my point of view, Google is coming back, and on the AI side, and maybe the whole ecosystem around mobile devices, it's really something you have to consider and put into your strategic considerations if you talk about using AI, and about using it when it's integrated into everything. And yeah, user-centric: we may get a new era of Google in the next couple of months; I wouldn't say years, definitely a shorter period of time. Absolutely. There's a real emphasis on multimodal capabilities, real-time assistance, and user empowerment through AI tools. That reflects a future where technology adapts seamlessly to human needs and context. So I'm keen to see what's going on there in the next few months and years. Yeah, that brings us to the next topic then. More or less a funny story, if you want. Well, we can say it's a funny story, but I think some security people will not agree with us. The headline is: AI agents outperform human hackers. And I know from a couple of companies that the new issue is not only something like phishing, not only something like social engineering by itself; it's using powerful tools like AI to improve hacking capabilities. Well, recent cybersecurity competitions showed that AI agents can really outperform human hackers. Elite human hackers. Elite human hackers, yes. So now the script kiddie is able to write something to start a DDoS attack on 127.0.0.1. Please put 1337 in the comments if you got the joke. That's another one. Anyway, that's incredible, right? Yeah, that's right.
These AI agents use reinforcement learning and generative adversarial models, continuously adapting their hacking strategies and techniques toward the goal. I mean, imagine this as a digital cat-and-mouse game where machines are fighting against each other, and that will evolve at the speed of machines. That's really incredible, and it's really making me afraid. One or two episodes ago, we were joking that 512-bit encrypted stuff will no longer be safe in the far future when we're looking at quantum computing. And now we're talking about agents which are hacking and breaking into systems. When it comes to humans interacting with systems, there is a fault tolerance built in, and, I mean, we don't need to take that further, but that's scary, actually. We are both Microsoft MVPs, right? I don't want to attribute it all to Microsoft, but to emphasize the cloud security instances we have from different environments, I would say. I know it from the Microsoft perspective: there are so many different ways Microsoft improves security, learns from different tenants and environments, and brings everything together to secure your environment, also relying on big data and AI-supported security. And I think you need something like that in the near future, not only the typical "we have a firewall, we are secure". That alone is not something that will protect people in the near future. Yeah, it'll be a very dynamic scenario with an interplay of AI defenders versus AI attackers, and technologically, agility and continuous adaptation will become very crucial there. And that's really not something you can fully prepare for. Yeah, definitely something you have to keep an eye on, because this will develop. And even if you're capable of disconnecting your network from the internet, you're not safe, looking towards LiteRT and other technologies where you can run an LLM locally. So that brings additional security efforts you have to make.
Yeah, it's very, very dangerous, but it's funny that elite human hackers are not as good as AI has already become. And I'm not sure if it is just a matter of time, what an AI versus a human needs to explore code and such, but it can be. So, then something else strange happened when we look at all that AI stuff and the use of ChatGPT and where it is already in use: there were fake court cases, and that produced a legal drama with AI. Controversially, ChatGPT recently became famous for inventing 129 fake court cases, leaving legal professionals on edge about AI's reliability. So that's another story which really has some points and needs some viewing angles to be reviewed, Michael. Yeah, well, it's definitely another level of hallucination, I would say. This is something where AI gives you a whole story, right? It's not a single fact; it's everything from the start, through everything that was discussed there, to the results. Well, that's interesting, and it brings us back to the clarification and verification that is still necessary when we work with AI. From my personal point of view, it's really interesting because I had a customer whose legal department was asking, hey, can we do something like that with Copilot? And we said, there are AI companies using real data for this, using real cases and court files and so on, and they are better for those scenarios. But well, reading this, I'm not so sure. Yeah, the models are trained, they're specialized, and then we talk about chat. Here is ChatGPT. I thought, I got it, I got it. But in this case, ChatGPT itself... Yeah, so that really brings up a point, right? It's about retrieving and synthesizing data and how that's done, and whether you need to verify it or end up with plausible cases but fake references. So. Yes. Good.
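The "plausible cases, fake references" problem above comes down to a simple discipline: check every model-supplied citation against a trusted source. A minimal sketch of that idea follows; the case names and the `verified_cases` set are invented for illustration, and in practice you would query a real legal database instead of a hard-coded set.

```python
# Hypothetical stand-in for a trusted legal database lookup
verified_cases = {
    "Smith v. Jones (2019)",
    "Doe v. Acme Corp. (2021)",
}

def check_citations(citations):
    """Split model-supplied citations into verified and suspect lists."""
    verified = [c for c in citations if c in verified_cases]
    suspect = [c for c in citations if c not in verified_cases]
    return verified, suspect

# A hypothetical model answer mixing one known and one invented citation
model_output = [
    "Smith v. Jones (2019)",
    "Miller v. Quantum Airlines (2018)",
]

verified, suspect = check_citations(model_output)
```

The design point is that the verification step is external to the model: a language model can fluently invent both the case and the reasoning around it, so only a lookup against independent data catches the fake.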
Imagine the effect when someone had to go to jail because of one of those fake references. I mean, that's pretty awkward. So it means verifying and proving the results of AI is still a thing, and it's nothing you can skip. I mean, trust is somehow building up, not on my side, but with others who are using it. And I have to say, and I'm really, really emphasizing this all the time: double-check everything AI is doing and everything it returns to you. And yeah, one more thing while we are at that stage already. When we talk about reasoning and multi-step thought and all that stuff, please do me a favor and don't confuse machine "thought" with human thinking. It doesn't have to be anything like that. It is just the most likely probability, which can also be a false positive in a single step, while the result can still be the correct answer. So when you force an AI to think in multiple steps, to make it transparent to you, and you're going through those steps: don't understand it as human thinking. Please treat it as: it can be wrong in between, yet the result can be correct, and so on. It has nothing to do with thinking; it is just the most probable answer for that step. Understand it more or less as an in-between prompt step, nothing else. Well said. Thanks. Yeah, absolutely true. I think that's... We can end this episode. See you next time. No, just... No, no, no, let's do the recap stuff. Yeah, let's have a quick recap of this episode. So if you just skipped to the end and wanted to hear the jokes, right? Then you can go back and see what you have missed. All right, so the first topic was Claude 4, the latest AI model promising even stricter safety standards; safety, of course, is becoming the new battleground for AI innovation. Yes, then we had Google's Veo 3 making waves, allowing users to generate millions of AI videos in just days.
It's the democratization of AI-powered media production. Then Google dropped a new app to run AI models locally. And yeah, that's still crazy. Anyway, then we had major announcements about new Google products and integrations from Google I/O 2025. And yeah, I think Veo 3 and Gemini Live are the biggest ones from my personal perspective. And then we have... XR smart glasses are your biggest point, man. Maybe, maybe. I already wear glasses, so it's maybe just a minor upgrade. If you see me with new glasses, you have to be afraid. Yeah, I'm already afraid. I don't know yet about your glasses. Yep, last but not least: we heard that AI agents are more into cybercrime than into cybersecurity, outperforming elite human hackers. They showcased in a competition that AI agents are outperforming human hackers, and that means we're entering a new era where AI isn't just protective, it's proactively offensive. And then we had ChatGPT's fake court cases, where the legal AI drama was ongoing. ChatGPT recently became famous for inventing 129 fake court cases as references for a real court case, leaving legal professionals on edge about AI's reliability. Michael? Well, that was a lot, I would say. A lot of interesting stuff. And I like that we pick out some news and talk about them; there is way too much news about everything. I like that we can talk a little bit about the story behind it. But for now, I would like to close today's session. Shall I try again with the outro part? Yeah, well, we heard a lot about groundbreaking announcements, and yeah, you can go for the outro, and maybe you like it the way we did it in the past. So then you may go on with that. Stay tuned, stay interested. Here we go. Bye-bye, take care all, thanks for listening. Bye.
