The Final Data Science Happy Hour.mp3
Harpreet: [00:00:09] What's up, everybody? Welcome, welcome to the Artists of Data Science happy hour. It's Friday, December 2nd. It is the final Artists of Data Science happy hour, man. This is the last one. In the background, we got a little bit of Lupe Fiasco going on. I used to listen to this track to pump myself up before the happy hours when I was doing it for the first time, just because being so public on LinkedIn, that shit was scary, man. That shit was scary, but I did it anyways, man. Did it anyways. And it was great. I haven't done many important things in my life, but I could say this is hands down one of the most important things I've ever done, not only for myself, but for all the people that have taken part in this, and just seeing their careers grow and their networks grow and all that. So this is hands down the most important thing I've done in my entire life, professionally at least. You know, having a wife and kids and stuff, that's pretty important. But I couldn't have done all of this without you guys. So shout out to everybody here that's been, you know, A-1 since day one. You know, I wish David Langley was here, man. I mean, back in the pandemic days, he was like one of the OGs. He was here all the time. But yeah, shout out to everybody that's made this an amazing, amazing experience for me. I appreciate all you guys, every one of you. Yeah, man. Last one. Last one. All right. Vin says Harpreet is 5 minutes away from a divorce.
Harpreet: [00:01:24] Yeah, I had to cover my ass there. Yes, wife and kids, obviously more important there. But no, honestly, man, professionally this is probably, hands down, the most important thing. Just the network I've built and the network you guys have built, and the connections that just happened from people being here. It's amazing. So thank you guys for taking time out of your schedules every single week to join me for over two years. I think the first happy hour was October something, 2020, like the middle of the pandemic. And these rooms used to be popping, man. We'd have like 50 people in here, everybody just looking for a connection [00:02:00] and wanting to hang out. And that was awesome. People from all over the world, too, man. I mean, it was like the middle of the night for some people, and you guys were joining, and I absolutely loved it. But we're here. We're here for one last one. Thank you, my friends, for joining in. Big week this week, man, a lot of stuff going on. I haven't got a chance to play with it, but OpenAI's ChatGPT is awesome. I read some of these prompts, and I just have to read this one because it was hilarious. I thought it was the funniest thing ever. So I'm going to read this prompt here. Somebody had prompted ChatGPT to write a biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a VCR. And here it goes.
Harpreet: [00:02:44] This is ChatGPT's response to that. And it came to pass that a man was troubled by a peanut butter sandwich, for it had been placed within his VCR, and he knew not how to remove it. And he cried out to the Lord, saying, Oh Lord, how can I remove this sandwich from my VCR, for it is stuck and will not budge? And the Lord spoke to him, saying, Fear not, my child, for I shall guide thy hand and show thee the way. Take thy butter knife and carefully insert it between the sandwich and the VCR, and gently pry them apart. And in the end, with patience and perseverance, the sandwich shall be removed and thy VCR shall be saved. And the man did as the Lord commanded. And lo and behold, the sandwich was removed. And the Lord said, Verily I say unto thee, seek not to put peanut butter sandwiches in thy VCR, for it is not a suitable place for such things. Rather, keep thy sandwiches in the refrigerator or on thy plate, where they belong. And the man heeded the Lord's words, and from that day forth he kept his sandwiches in their proper place and was safe from trouble. Amen. I thought that was the most hilarious thing ever. ChatGPT with the jokes. Also, PyTorch version 2.0 was released this week. That's huge. What else is going on, man? What else is popping off this week? Kenji, man, good to see you done globetrotting, been bouncing all over the world, man. How have you been?
Speaker2: [00:03:58] Yeah, I figured, right, you [00:04:00] couldn't make me miss this. This is some monumental moment, I know. I've been really good, back at home in Hawaii for a couple of weeks. It was a lot of travel, to Italy, Spain, all over the place. But I'm happy things are kind of calming down now, and I can take a little bit of time to celebrate, I guess, maybe the end and the new beginnings here.
Harpreet: [00:04:28] Yeah. Yeah, absolutely, man. Monica, good to see you again. How are you doing, Monica?
Speaker3: [00:04:35] I'm really good. How are you?
Harpreet: [00:04:37] Oh, great, man. Great. Just loving it. Loving it. You know, the next baby is on the way, coming in just, you know, a week or two, literally any moment. So that's about to get hectic. So yeah, man. How you been? What's new with you?
Speaker3: [00:04:54] I'm now working for myself. Full-time nerd nourishment. Doing, like, event reviews, putting together some stuff for the future. Kind of playing around with some ducks this month. I got a duck Advent calendar, so I'm just playing around with different technologies and such. Gearing up for the new year.
Harpreet: [00:05:18] Yeah. That's awesome. Joe and Matt, good to see y'all here again. What's going on, y'all? Also, shout out, buddy. Yeah.
Speaker2: [00:05:25] Good to be here.
Harpreet: [00:05:27] Yeah, it's dual mics. I love it, man.
Speaker2: [00:05:29] Yeah. Yeah, that's right. We've improved the setup over time. Yeah, things are good, man. We're just kicking it. Matt's going to be on the East Coast tomorrow. Yep. So, in New York, hit me up. Anyone else coming to Ethan Aaron's happy hour Wednesday?
Harpreet: [00:05:44] Oh, damn, in New York.
Speaker2: [00:05:46] That's a big fat no. All right, cool.
Harpreet: [00:05:48] Wish I could be there, man. Shout out to everybody else in the room as well. Kosta, Eric, Matt Blaze in the building. Matt Blaze, what's going on, man? Good to see you. David Fair and [00:06:00] Balaji, good to have you all here. So let's kick off the discussion, man. Vin, what's going on? Listen, Vin's always been my go-to guy. I'm gonna go to him one last time for the happy hour here to kick off some discussions. Go for it, man.
Speaker2: [00:06:12] Oh, I wasn't ready. I thought somebody else was going to take it. All right. Oh, what's new? Yeah, that's interesting. That's kind of new. Meltdowns, new. Well, not really new. But I think the guy just taught us yesterday, or the day before yesterday: if your company melts down, do not go on live television with a really smart interviewer. Find, like, the dumbest interviewer you can. If you're going to do an interview, do that one. Because I have a feeling that's coming for some data science companies. I don't think we will lose people a ton of money, but we're going to have some ethical challenges coming up. So yeah, if that ends up being one of your companies, don't do what he did. That was bad. I think attorneys call that incriminating yourself. Especially with Andrew Ross Sorkin. I've actually met the guy before. He's actually really smart. Yeah. Not the guy you want asking questions in that way. So.
Harpreet: [00:07:14] What went down? I haven't seen this or heard of it. I've been kind of disconnected from the news. What happened? I mean, I know about the meltdown, but this particular interview was news to me.
Speaker2: [00:07:27] Yeah, he went on with CNBC and the New York Times. I think it's DealBook or something like that, is what they call the event. And he decided to pick basically the smartest person on earth from a financial standpoint to interview him, somebody who has maybe a negative-one tolerance for BS. And he came out with no idea what he was going to say, except: I had no idea what was going on. It wasn't me. It wasn't malicious. I should have done better. I was just an idiot. [00:08:00] You know, because he allegedly stole like $600 million, and yeah, it looks bad. And it looks like shell companies were involved, and offshore accounts, and people are calling him the new Bernie Madoff. So it's never good when the guy who interviewed him is the same guy who interviews Warren Buffett at the shareholder meeting, and also wrote the book Too Big to Fail, and is like a very popular financial columnist. Not the guy you want. Actually, he's the perfect guy to talk to if you want to get busted. So that's awesome. I swear this was a setup. It had to have been. It would have been a little bit funnier if somebody had, like, carted him off in cuffs right after the interview. Okay. All right. So, yeah, it's been fun. Maybe it was a chatbot and not a person.
Speaker2: [00:08:51] At this point it's probably just some, like, homeless dude with curly hair. That's actually him. Anyway, yeah. But what you're saying is, you think tech companies or AI companies are going to be doing the same kind of song and dance? I think we're like one or two years away from a few perp walks. From just a security standpoint, that's where we're going to mess up: we're going to oversell something to somebody who is powerful enough to bring accountability. Because that seems to be what happens. You either lose companies money, or you make, like, a hedge fund go under because you sell them your AI technology and it fails catastrophically and people lose billions, and suddenly handcuffs come out and the SEC shows up. So I think that's coming for us. I think we're going to have a security breach here in the next couple of years that gets somebody put in jail. And we're also going to have a very public meltdown with some oversold AI. And that's going to be the next one, where people [00:10:00] lose a ton of cash, or something mission critical fails infrastructure-wide and, you know, like a power grid goes offline for a month, or, you know, something critical goes down where somebody has to go to jail. And I think that's coming.
Harpreet: [00:10:17] Who would go to jail in that case? Would it be like an IC data scientist who was writing the code? The department manager? Like, who's responsible?
Speaker2: [00:10:25] Culpable.
Harpreet: [00:10:26] I guess, at that point?
Speaker2: [00:10:27] Depends how good the CEO is at throwing people under the bus, because that's truly the differentiating factor. If the CEO is high quality at throwing somebody else under the bus, then it'll be that other person. But more likely than not, I think we're going to be seeing some examples made of C-levels, especially startup founders. I think that's where it's going. It'll probably be a startup, not a big company.
Harpreet: [00:10:52] Like what type of startups? Like startups that are leveraging maybe some type of generative model? Maybe they're building their startup on top of, like, GPT? Or, you know... I don't want to make you implicate anyone.
Speaker2: [00:11:03] You're asking me to get wild, but I can't do that. That's the kind of thing that, you know, they send lawyers after me for. I can't, yeah, I can't actually say, you know, names of anybody that would have a public infrastructure contract, but that might be where you want to look.
Harpreet: [00:11:20] Yeah. Shout out to Greg Coco in the building, as well as Keith McCormick, Christian Steinhardt. Good to have all y'all here. Jennifer, Nadine as well, Sanker St of Austin, and Eric Sims. This is great, man. It's like a family reunion. I love this shit, man. Excited to have all y'all here. Yeah. So I'm curious, okay, there's a lot of companies coming out that are probably going to be leveraging, you know, these generative models like GPT or Stable Diffusion or things like that. We're talking about infrastructure, so I'm curious: what does the infrastructure or MLOps look like in that scenario? If that question even makes sense, I don't know. Just kind [00:12:00] of riffing at this point.
Speaker2: [00:12:01] No, I'm talking about critical infrastructure, like your power grid, your water, the stuff we would have serious issues with if it went down. Internet backbones, you know, a company like Amazon's cloud infrastructure, or any of the hyperscalers, which you'd use because you have to use something like that to manage anything that big. And if a startup manages to convince a power grid that they're worth having, "you should buy my AI, it will never fail, and it'll optimize your power grid," not to say anybody's done that, or that anybody who may have done that is fraudulent. I'm just saying that would be the kind of thing that would get you put in handcuffs if it was power, water, you know, a nuclear power plant, just any of those types of critical infrastructure. And there have been companies who are beginning to get into that space, where they're using models to do power grid load balancing, I think, or something like that. I can't remember what the early use cases they're pitching are, but that's the kind of thing, I think, that would get you into a lot of trouble really quick.
Harpreet: [00:13:14] Kosta, go for it.
Speaker4: [00:13:16] So at some level, this all comes down to responsible engineering, right? Like, that shit doesn't go away. It doesn't matter if we're now in the artificial intelligence age or whatever you want to call it. Right. So what is responsible engineering? And at what point, especially with things like critical infrastructure, at what point are you saying that stability is more important than optimization, or stability is more important than maximization of some kind of profit or time, or reduction of cost? The thing that I wonder is, how much are we reliant on, you know, government-based agencies that are working on essentially [00:14:00] monitoring and handing out these tenders? How much are we relying on them to actually know what they're buying, as opposed to relying on the people who know what they're selling? Right. It's a struggle finding experts to work in the government sector, because for a lot of people it's not as interesting, right? Like, oh, you don't get to do the AI or the robotics. You get to manage five tenders and figure out which one gets to do the AI and robotics, while you sit there with all of your expertise and years of experience and, you know, don't get any of that fun cake. I think that's an interesting struggle to have, right? Like, how do we make that job more interesting? Or do we just rely on the common sense and the engineering discipline of the people in the private sector that are designing all of these things? Yeah, I'm not sure. But at the end of the day, responsible engineering doesn't really go away, does it?
Harpreet: [00:15:02] Greg, go for it.
Speaker5: [00:15:06] Well, I wanted to, I'm probably going to change gears for a little bit. So, Ken, if you want to build on what's being said, I'll let you go first and then I'll come back.
Harpreet: [00:15:17] Ken, go for it.
Speaker2: [00:15:18] Yeah, sure. Real quick. I mean, I think some things like this have already really happened, it's just that there weren't necessarily massive legal repercussions. If we look at what happened with Zillow, I think it was last year or earlier this year, they mishandled how their entire machine learning infrastructure was designed to work. They also didn't account for how a black swan event would impact their entire business. If we think about it in terms of what machine learning, what AI, is not resilient to, it is events that we haven't seen before, and we can conceptualize that with COVID [00:16:00] or with any of these types of things. And so I think it could be, like, an oversight, we don't understand what's going on, but it could just be something that is outside the realm of what we believe possible that could break a lot of things that are in existence right now and cause really negative, dire consequences. So I think it's just interesting to look at it from that perspective, too. It's not just bad oversight in the present, or overpromising, or whatever it might be, but it's also this idea that, hey, again, a black swan type event could happen.
Speaker2: [00:16:35] And we don't necessarily know how a lot of the things that are out there will respond to that. Well, aren't you having, like, a volcano eruption over in Honolulu right now? Oh, Honolulu, come on, it's another island. I'm fine. What I'm saying is, yeah, it could happen. It happens, you know? I feel like, yeah, we're ready for that. It's so [00:17:00] hard to be ready for things like that. Exactly. And I don't know, you're on a different island, right? What island are you on? Well, I'm on Oahu, which is where Honolulu is. The Big Island is where the volcano is. No volcanoes on this one, luckily. My girlfriend did go there today to go check out the volcano, which seems insane to me. Saw the pictures on Instagram. I mean, we live not that far from Yellowstone, which, if that thing blew up, I mean, we'd just be in really bad shape. Anyway, it's the Preppers of Data Science episode here, so it's awesome.
Harpreet: [00:17:29] Greg, go for it.
Speaker5: [00:17:32] Yeah. So, I mean, I'm sorry for joining a little bit late. You've probably gotten all your kudos, Harpreet, for how you've helped the community over the past, what, two years, right? Two years. So, yeah, congrats, dude. Like, I have this shirt for you today. That's what you turned me into: "I'm a machine learning model."
Harpreet: [00:17:57] I love.
Speaker2: [00:17:57] It.
Speaker5: [00:17:59] Hopefully, [00:18:00] you know, I go apply things. But what am I going to do when things get to me and there are no more of these meetups, right? It's going to be messed up. But yeah, thanks, Harpreet, man, it's been great, especially for me. I talk about it sometimes, man, but I've been hit by imposter syndrome for years when it comes to, like, venturing into AI or data science and things like that. And you've made me feel comfortable. You made me feel okay to ask dumb questions. And we're going to miss that, man. I'm going to miss that a lot. And you guys, too, man, like these familiar faces, right? I'm looking forward to continuing to talk to you guys. When you get 10 minutes with Vin or Joe, your mind, your head starts to get so big. You know, I talked to Ken, and this guy showed me a paper that changed my life on how to read papers fast. Dude, I never told you that, but it was amazing. Eric, Monica, I mean, you guys are all amazing. Serge, Russell, and, you know, Kosta, this guy is like a monster in knowledge. And, you know, it'd be good to keep in touch from time to time. But anyways, I digress. Let me go back. So I've got this funny thing, it might sound crazy, but I feel like there's something happening in the AI world, especially when we think about big machine learning models. Like, GPT-4 is coming; look at ChatGPT that just came out. I feel like there's this thing that you maybe don't want to think about, which is: do you guys feel like there's going to be this AI grid being formed that's going straight to consumers? For example, it'll be where I have a utility bill every month where I can tap into any kind of AI feature I add on my phone or add on my device, and I tell it, hey, keep checking my pulse and tell me what's going on.
Speaker5: [00:19:56] And then it's like a 2% bill that is sent to me [00:20:00] every day. And then this grid... I feel like the big dogs are fighting to own that grid. It's like a power grid, a giant power grid, where you're supplying these quick-to-onboard, low-code/no-code kind of features on your devices and everything, straight to the consumers. Because right now, when you think about GPT, when you think about all of these big models, it's kind of like a B2B type thing. It enables smaller startups to make money with consumers. But now it's like, are the big powerhouses looking to launch these things straight to consumers, where they can pay that small fee, where it's not hurting their pockets, but they become increasingly dependent on these small things that let them be lazy and not think about the things we usually think about, like paying my bills, like checking my walks, like why are my feet hurting, things like that. Do you guys feel like this is where we're going with these giant models being born? Are we going into the age of a big machine learning model power grid that will support our everyday lives? Simple question, for anybody.
Harpreet: [00:21:09] I want to jump to Serge on this first, and then, you know, anybody else. I guess after Serge, Eric, Keith, I'd love to hear from you, and anybody else, really. But I would argue, man, I think we're already there. We just pay with our data. To a certain extent, I think we're already there and we're already paying for it. Like, I don't know if you guys have updated to the latest Zoom, right? Everybody raise your hand. You'll notice that Zoom will pick up on it and it'll suggest an action, and that's, you know, that's computer vision there. I don't know if you've updated the new Zoom or not, but if you raise your hand, something will come up and ask if you want to raise your hand. That's right there for us. And, you know, I think it's infiltrated. That's the reason I really started...
Speaker2: [00:21:51] Like, hopefully that was a great prank, really.
Harpreet: [00:21:54] No? Did it not work for you? No?
Speaker2: [00:21:56] No. You made everyone raise their hand as well. I mean, I'm sure you have [00:22:00] to clap really, like, really aggressively, you know. If you make a super frown, it'll put up a frown emoji.
Harpreet: [00:22:10] But yeah, I think we're already there. That's part of the reason why I started pursuing, just learning more about, deep learning: because we use it every single day without even realizing it. Meanwhile, people are just thinking that, you know, deep learning is not useful, it's not interesting, or whatever, like you can't get business value from it. There's all these companies making all this money from it. It's useful stuff. But I'll pause. I want to hear from Serge on this, and then we'll go to Keith, I'd love to hear from you, and then Kosta. Serge, go for it.
Speaker6: [00:22:39] I apologize, but I even forgot the question at this point. You started talking about... you know.
Speaker5: [00:22:48] I can quickly recap for you, Serge. I can quickly recap for you. Are we moving towards some sort of powerhouses fighting to get a hold of what I call, like, the huge machine learning power grid, which goes straight to consumers, right? So, Harpreet, take what you said about Zoom, giving you the capability to respond to you raising your hand. That's probably a model that's controlled by Zoom. But are we going into an age where you're going to have a few utility powers, like, you know, a few companies, like the ones that supply power to your house, that will own this, you know, AI grid that you can tap into and pay a bill for every month, say? Do you feel like we're going in that direction?
Speaker6: [00:23:27] Yeah, yeah. I think that could happen. Consolidation in AI services, you know, the companies that already have cloud services, they have all these AI models off the shelf, and these could fail, and they're all interconnected. And of course, they're being used by companies that offer third-party home services, you know, like security systems, and, you know, the very same systems that are already owned by Google and Amazon [00:24:00] that provide services like Alexa and, what's the name, Ring, and all those. Of course, they're already very intrusive. And in a way, well, not only are they an extension of their power, and they concentrate that power, but, you know, once they fail, people are going to lose trust in the technology. And if they don't fail, they're just going to keep becoming more powerful. I don't think I like that, which is why I don't have any of those things. I don't have an Alexa, I don't have a Ring doorbell or anything like that. I guess at some point people are going to start calling me, like, a Mennonite or something like that.
Harpreet: [00:24:48] Keith, let's go to you, and anybody else who wants to jump in here. Matt, I'd love to hear what you've got to say. Just go ahead and raise your hand. Not, you know, don't raise your hand here.
Speaker2: [00:24:57] Yeah, so I don't know if I have any kind of brilliant futurism insight here on Greg's idea, but it did resonate with me with something that I heard at MWC this week, so I thought I would just share that. I went to the keynote by the automotive VP. I don't know who else might have been in Vegas this week; it seems like all of humanity was in Vegas this week. But when I was listening to this keynote, I kind of figured that it would be autonomous vehicles and stuff, which is always an interesting topic. But they spent quite a bit of time on something that's very much like the Apple Watch or smart home type stuff that Serge was just talking about. The idea is that pretty much all of the car companies are going to want to do software updates over the air. But also, something that I feel naive that I didn't see coming: once you've got that happening, because, like, Tesla does that now, I believe they do software updates over the air, it [00:26:00] becomes a marketplace, because the car companies will want to sell you things through this marketplace. And of course, the insurance companies would want to get involved, because then they can give folks the option of usage-based pricing. So it's really interesting. And it was a 90-minute keynote, so they had a lot of stuff to do, including getting clients to come up on stage, but they spent 20 to 30 minutes on this whole thing. And it was really something: your car turning into something kind of like your Apple Watch.
Harpreet: [00:26:41] Vin, let's hear from you.
Speaker2: [00:26:46] I think it's going to be kind of like the streaming market right now, where you have way too many different players, and nobody wants to have eight different platforms or ten different platforms. I don't want to have to subscribe to Amazon, Google, Microsoft, Meta, Twitter, and every other one you can think of, plus all the business ones. And it's going to turn into one of those things where it's just a cluttered marketplace. There's not going to be a really critical differentiator, and trying to have all of them to get the one thing that each one does great is going to make it really difficult to get widespread adoption. Because what you're kind of talking about is a super platform, or a super app. And that developed in China because there was one choice; you know, China sort of allows monopolies to exist. But in the US, we're just such crazy, fast followers that as soon as one company figures out how to make some cash on this, you'll have 15 within a year or 18 months. And that's something that we do really badly: [00:28:00] we don't do best in class, best in breed, and then go to market.
Speaker2: [00:28:05] We just go to market with what we have, to be first. And that allows a ton of people to come in. And because nobody really has the best product out there, or has enough consolidation to be the choice, the super app, we have all these kind of pseudo-competitors that don't really compete with each other on quality. They're just kind of hanging out, and they're willing to take whatever they can get. But I think longer term, where you're talking about, you know, just downloading and being able to use AI, it's going to be good, because we are really bad at looking forward into the future and figuring out: if I do this today, what does that mean for me in a year, or three years? And especially with health care, that's going to be a huge application. And if you tie that to things like insurance prices and, you know, other different, not social credit, but almost like health credit, where, you know, if I eat this donut, am I good if I work out an extra n times? You know, whatever. Am I good? Can we do that? And having a lot of data about that, just being able to say, yeah, no, you can have a donut, you're good, you know, have a cookie, enjoy yourself. Or, yeah, that beer is okay. Or, no, that's your seventh beer.
Speaker2: [00:29:23] What are you doing, idiot? And then there's consequences for it, where you're looking at potentially paying higher car insurance, or potentially having health insurance be impacted long term by repeated stupid decisions. You know, then all of a sudden you're into extreme behaviors, and then personal freedom. So we've got a lot of iterations to go to get to the point where we have a good relationship with it, like with online social media, and I think we've got a long, long way to go with that. But there's going to be benefits along the way. We're just [00:30:00] continually going to have to churn through it. Somebody throws something to market and doesn't think about how it's going to wreck a ton of other things, then we have to clean up after it for two or three years, and then the next one hits just as we're starting to get control of the last one. I think we've got a lot of these waves still to go before we settle down into something that's a good relationship with technology.
Harpreet: [00:30:25] Vin, thank you. Just a quick comment coming in here from LinkedIn, from Akmal Syed. If you guys are in Oakville, you should follow him. He's awesome, great content. I did an interview with him a month or so ago. He says this happens with BMW: they sell heated seats on a monthly subscription. And I just found that to be absurd. That is absolutely absurd. Let's go to Joe and Matt, then Kosta, then Ken, and then Russell, I'd love to hear from you on this as well. So after Ken, if you wouldn't mind just putting your hand up so I don't forget to get to you. Go for it.
Speaker2: [00:31:00] Yeah. It seems to me right now like these models are still too brittle for these general-purpose use cases. At the same time, this is probably going to be some new startup's business model within the next couple of years, or when there's the next big startup wave, the same ones that go to jail that we were talking about earlier. That's the next thing after their current startups go under. And then, yeah, I'm really interested to think about the long-term ethical implications, right? Like, we're just dealing with the ethical implications of Twitter and Facebook being such a part of our lives for the last... The big models remind me a lot of where crypto was a few years ago, with a lot of the hype and, I think, a lot of the promise. It's obviously got a different flavor, but it does remind me a lot of the same sort of early enthusiasm. So, you know, we'll see. I mean, like crypto, I don't know, anything related to money just tends to attract, like, kind of scammy douchebags. And so, you know, hopefully that's not the case with these models, but don't count on it; anywhere there's lots [00:32:00] of money, there's going to be lots of dorks. So yeah, it's awesome. Anyway, we'll have to talk about this in a couple of years. Yeah, we've got to remember this conversation and look back at it. Yeah, it...
Harpreet: [00:32:15] Is. It is recorded the last half hour.
Speaker2: [00:32:17] We have proof. We talked about that.
Harpreet: [00:32:20] Ronit on LinkedIn is saying that with Web3 that could be a possibility, where transparency is the biggest foundation and users control how much data they'd like to share. Let's go to Kosta, then Ken, then Russell. I'd love to hear from anyone else. And if you're watching and you've got questions or comments on LinkedIn, I'm keeping an eye out, so let me know. Kosta, go for it.
Speaker4: [00:32:41] So one of the things within the medtech industry, right, is you have this thing called regulation. You have the TGA, you've got the FDA, you've got all these regulatory bodies that are extremely stringent. So if I want to go and make a device that you have to wear, invasively or non-invasively, and use for some kind of medical health reason, I have to go through a bucketload of scrutiny. A bucketload of scrutiny. I'm talking your first one could take a couple of years to get through all of the regulatory requirements. Right. But the crazy, wild, insane thing is, those are only for acute parts of human health, if you really think about it. Whereas I could build an app that manages sleep, or claims to manage weight loss, or claims to manage all of these other systemic things in our lives. And this comes from a serial user: I've used every sleep app under the sun, I've used a bunch of different fitness and weight loss apps and all of these things, right? But how many of them are actually as good as they purport to be? It's very difficult to really nail down, okay, which of these are good and which of these are [00:34:00] absolutely bogus. Like, I've tried some sleep tracking apps where I look at their summary graph and I just go, that's B.S., right? There's just no way that that's the summary graph that's coming out of an overnight sleep like that.
Speaker4: [00:34:12] Right. And there are others that are reasonably good for the technology that they have at hand; there are real technological limitations on what you can do. Now, with that degree of data that we're playing with, at what point do we realize that we're playing with fire when we're talking about behavioral things? Hey, this app got me into this behavior that got me to do X, Y, Z, and that's creating all sorts of health complications for people down the track, right? If we're not sensible in how we build these apps, if we're not well informed in how we build these apps, we can actually have a net really negative impact on the wider health picture. Because sleep and weight are huge. If you don't get those right, there are huge comorbidities later on in life, with links to Alzheimer's and all sorts of other stuff. I'm not a medical expert, but at the end of the day, these apps don't go through any of the scrutiny, any of the regulatory zeal that you would otherwise see for things that are supposed to be impacting people's health, yet they're impacting health at epidemic levels, at the scale of entire demographics. So it's a little bit crazy to me that that doesn't happen. And I think we kind of come back full circle to the discussion we were having before, right? I see human health as essential infrastructure. At the end of the day, it's as essential as, say, electricity or gas or water. So I've been meaning to chuck this one at you guys.
Speaker4: [00:35:46] There is this concept of chartered engineering, right? And it's really only alive in civil engineering and in power [00:36:00] electrical engineering, where you cannot be a civil engineering firm building a bridge or a building without having a chartered engineer sign off on the site plans, on the design. You need a chartered engineer to sign off on all of these things. Now, is that something, or is there something similar? Obviously that's a slow addition; it slows down the speed of releasing software. But is that something that we should start to have an appetite for? Is there another way around it that we can do without losing the agility of software? I don't know. Is there value for that in our world, particularly with AI? With software, you can build software that's so innocuous and small that you maybe don't need it. But when it comes to AI, when we're starting to see second-order and third-order effects, it makes me start to think.
Harpreet: [00:36:58] Ken, let's hear from you. Shout out to everybody else that just joined. Ben. Good to have you here. What's up, Ben? Yusuf, JT, Patrice, Eric, Antonio. Good to have all you all here. Go for it, Ken.
Speaker2: [00:37:11] Kosta, I don't know if I can answer your question. I was going to take a slightly different angle, actually, going the opposite direction up the chain. When we talk about, for example, an app market or a service market, it's very fractured, right? The thing to me, and I think the thing that's most interesting, that's actually going to truly change our future, is the platform. So you look at how Apple basically dominates the platform of apps: they control the faucet of where people can go in and look. And I think we see it in the news very clearly with Facebook and Mark Zuckerberg realizing that he truly missed the boat by not creating the initial platform to be able to disperse these things on. And so you look at [00:38:00] the metaverse, for example: his work on that is to control the platform where all of this is dispersed. And I think that's going to have such a massive implication, if there is one platform or a unifying platform that controls everything that goes out, all of these services together, whatever it might be.
Speaker2: [00:38:18] That's going to be interesting when we talk about the not that the other things weren't interesting, but that's like the scariest and I wouldn't say the most looming threat, but it's the most complex issue. I mean, you look at the car example that was brought up before each individual car company. By definition, they have their own platforms to launch services in a different sphere, where we have phones, where we have whatever, like headgear, metaverse devices, that's still evolving. And if someone completely controls that. That essentially unifies a lot of power, all of these advanced technologies in a more singular place. I don't think we'll ever have the, like, software type of monopoly like we do in like like we see with WeChat in China. But I think a hardware type monopoly can enable some of the really scary and looming effects of a more software monopoly. So I think I don't really know where that's going, but I think it's important to bring up and think about as well as as we approach this in the future.
Harpreet: [00:39:26] Russell, let's go to you, and then after Russell let's go to Ben.
Speaker2: [00:39:33] Thank you, Harpreet. So, jumping back to Greg's original question, I'd be very surprised if we go to one master model that's going to feed everything, simply because there's so much competition in business. I think people want their own version or their own section of something, and they'll market that their model is, you know, five points better than another model for [00:40:00] various different reasons. And they'll try to target this to different parts of society. So they'll say, you know, if you're young and hip you want to go for this model rather than that model, because that one is for the boomers, and all of this kind of marketing language. And moreover, I think the biggest problem with business is that far too many businesses prioritize profit rather than service or product quality. So money tends to be what drives many of these decisions. And I think the same is likely to happen in all business, including the newer businesses, the development of AI and ML models, etc. Then moving on to some of those points: smartwatches have been great, but they're not perfect. I have a smartwatch and I put some sleep apps on it, and I agree with you guys. I think it's more hokum than not, and I'd put it at 90% hokum. You know, I sleep well sometimes, but mostly don't, and I lie awake at night and just stay still and try to almost meditate. And it tells me that I've had REM sleep for five cycles, when I know that's not so. I think there's just some kind of a sequence there: if you're still for a certain amount of time, it splits things up and tells you you're doing things you're not.
Speaker2: [00:41:29] And I've not found one that persuaded me. Otherwise I don't use them now, but also some moral and ethical dilemmas for smartwatches, you know, because it tracks your movements and the things that you do. So I made one point that if you sign up to an insurance policy for anything and you don't make a declaration, so you made a declaration that you're in perfect health, but your watch picks up, you've got some minor heart murmur and then you need to make a claim. And the insurance [00:42:00] company can can access your data that's provided from this smart wearable that goes to a service. It's Apple Watch or Samsung or it or anything like that. If they can access it and then use that to refuse to pay out. That to me is an ethical moral dilemma. And I'm making a a significant difference in my assessment from someone that is willfully trying to misuse an insurance policy than someone that legitimately does not know they have an issue. It's not been diagnosed. They were honest at the time. That to me should not be an issue. But I imagine insurance companies wanting to maximize their. I probably would investigate that as a routine. So there's some severe moral and ethical dilemmas there, I think. And then rounding this back to Greg's question again, I think a single master model could probably help prevent that, but I don't think that's going to come anytime soon. I think we're quite a long way from that because of business, competition, etc..
Harpreet: [00:43:04] Russell, thank you so much. So Greg's question was about this future where only a few big superpower companies can build this AI model power grid, similar to how we have utilities and services right now: water service, telephone service, and so on. Is there a future where we have AI services that we just tap into a little bit and then kind of pay to use? Greg, hopefully I captured that.
Speaker5: [00:43:31] Yeah. So it's kind of like direct to consumer, and, you know, I can afford to subscribe to a few to gain societal advantage, right? Check on my health when I run, when I'm at work and I'm stressed and it's guiding me, and I pay a monthly bill, and I become increasingly dependent on these kinds of little features that I can onboard on my devices, or sensors that I put on myself, and things like that, where only a few power grid [00:44:00] service suppliers can supply these, where they're skipping the line. Like, today you see big models like GPT enabling other startups that then create services for either other businesses or consumers. But I'm thinking about this power grid that goes straight to consumers like you and I, and we're just becoming increasingly dependent on that to survive, to have societal advantages and things. I do see a future like that, and I guess that's what my question was. Mark, I'm happy to see you too; I'd love to hear your thoughts on that as well. Yes, it's dystopian, like The Matrix, Eric.
Speaker2: [00:44:38] So, Greg, I think there is, and maybe I'll just speak to building a monolith from the business side, because I think it has to exist on the business side before it goes to the consumer or the home side. So a fun, maybe a bit provocative, question to ask this group: who is the smartest data scientist in this meeting right now? There's a lot of really, really smart people here, some pretty profound people with a lot of experience. But the point I'm trying to make is, if we actually pooled all of our experience, we're a team of 26 people, and I could probably bring in panels of people to find out that we have gaps. Not an individual; we as a group have gaps in data science: loss functions we don't know, methods we don't know, failure points we don't know. And so what that means is you will have a business that becomes a monolith. Open source won't be able to keep up with the number of failure modes or the shared knowledge, and so it will be a business that's collecting all of them. And so I think we'll start moving away from build-it-yourself; most of us on this call lived during the build-it-yourself era.
Speaker2: [00:45:50] And eventually, when it comes to time to value, that will continue to be accelerated to the point that I like to talk about Jarvis for everyone. So in the future we [00:46:00] will have Jarvis for everyone, and that will be a very helpful thing to have at home. And some people might think that begins to sound dystopian. One fun theme to think about. There's some complicated ethical discussions around personification or proactive learning. So one example would be my kids are me. As a parent. I'm frustrated that it's so gross. There's people eating in the TV room again in my home, knows that I'm in charge. My wife and I are in charge, and my home suggests that I can turn off the TV the next time that happens, which is actually a really fascinating thing. So the home suggests that that means the home is proactive enough to know it can collect the data or it's already collected the data and it can execute on a model. There is no data science like it was all no code, I think escapes to automatic speech. But maybe Gregg react to anything I've said or I think the key thing I think about is experience. Capture becomes a black hole where no individual data scientists, even today, they're already losing the fight with experience capture with these bigger a companies out there.
Harpreet: [00:47:09] Mark, let's go to you. Good to see you here, Mark.
Speaker6: [00:47:12] First of all, excited to be here, and I just want to highlight how amazing this thing you created is, and you carried it on for a while. This is the last one, but I know you're going to do greater things going forward. But to respond to Greg's questions and Ben's comments: I recently went to the TransformX conference live, and you can probably find the talks on YouTube, but they had a lot of the people who are CTOs or CEOs of Google and OpenAI and all these things. And the argument they're making is that we've always had these big models; the thing is, when do we release them to the public, because there's so much responsibility and ethical concern regarding that. One thing [00:48:00] they described is kind of what the future is, and that's why I jumped in, because Greg had a really nice analogy of this AI power grid. Is it similar to what we have with the App Store or the Google Android store, where you have this platform and people build businesses on top of it? The argument they made is that we're going to be seeing these massive AI platforms, which no one can really compete with just because of how much compute it takes to build them, become platforms that people build on top of. So that was an interesting thing coming from that talk. It's well beyond me to say, here's my opinion on how the future is; I'm more so repeating what they're saying in those talks. But it was interesting to hear a lot of those leaders saying that it wasn't that this is happening now; it's more that they decided to open this up now, which is a slightly different thing. With these large companies, they've always had these kinds of capabilities; it's a matter of how they commoditize and make a market out of it.
Harpreet: [00:49:01] Yes. Listening to the Machine Learning Street Talk podcast, I think it was the most recent one, they had Aidan Gomez, CEO of Cohere, one of the authors on Attention Is All You Need. And he was talking about just that: trying to build a platform that people can build on top of. Thank you very much, Mark. We'd love to hear from anyone else, man. Anybody else wants to jump in here, please let me know. Anybody else got a question or comment?
Speaker5: [00:49:27] I got a follow-up question for Ben. You said something about Jarvis there, and it's interesting, right? So you talk about how everyone will have a Jarvis at home. My concern is, how affordable, how accessible will it be? Typically, when you see technologies like that, they're always available to only a few, and then over time they become more affordable. Then you have this lag in terms of when people get access to new technology that can help them improve their lives. So how do you see this being [00:50:00] commercialized on an even playing field, without benefiting some pockets of society more than others, things like that?
Speaker2: [00:50:10] That's a good question, I think. Jarvis for everyone, I see that as being relatively affordable, almost like Alexa. But the fun one to think about is the home droid. The home droid will start out being extremely expensive, like it's $1,000,000, and this will be the home droid that'll do your dishes. But just like the iPhone, versions come down in price. The old versions of the home droids, used, maybe become more accessible, and eventually you might get to a point where you can have a home droid for less than 100,000 that does things: quickly picks up, cleans, walks the dog. There are some very weird realities we could get into, because, yeah, hopefully that's a bomb to throw into the group to just open it up. Is it okay to have a home droid? Can the droid be stateful, with memory? If it's stateful with memory, that means your kids will form an emotional connection with it. Hey, Sally, how was school? School was hard, Billy was mean again. Oh, that Billy. That becomes an ethical concern, because what does the droid say, and how does that influence your kid's development? And just like you take a dog back to the pound, do the kids grieve you upgrading the droid? I don't know, is that question too weird for the group?
Harpreet: [00:51:32] There's an episode of...
Speaker2: [00:51:34] I guess Greg's question started this. Greg started it. Sorry.
Harpreet: [00:51:40] There's an episode of the show Black Mirror where there was a home assistant, a personal home assistant, that lived inside of an Alexa type of device. But the home assistant was actually an upload of that person's psyche. So it was that person's digital clone [00:52:00] that had all the same thoughts and feelings, and that's why the home assistant knew the person so well. Trippy episode. I'll see if I can figure out the title.
Speaker2: [00:52:09] It was also in Silicon Valley: the billionaire had a robot that would talk to his kid and put him to bed, but it was so the billionaire wouldn't have to talk to his kid. He's like, oh, not my fault, you have to go to bed now. So I'd get it for that, I guess.
Harpreet: [00:52:24] Kosta, let's go to you. Shout out to everybody else that's joining; good to see everybody in the room. Just want to say hi to the new people. Yusuf is here. Good to see you here, Yusuf; I've been liking his content. A good friend of Kosta's, I believe, as well. Go for it, Kosta.
Speaker4: [00:52:42] Yeah. I think we we reaching too far for an example. I mean, I don't think we have to reach all that far for, for an example of this kind of thing. Right. Like take, they take it back to health care, Right. The latest and greatest in surgical technologies is available in certain places. And at the other end of the spectrum. Nowhere near. You're nowhere near. Right. Like and as we go, I think the difference is we like maybe a hundred years ago, 120 years ago, we were creating more fundamental technologies like steel, right? Like we were we were talking about steel, we were talking about train tracks. We were talking about things that by nature were lower abstraction. Right. And the ability to jump from. Essentially, for example, from data to insight. Right. To be able to jump from data to insight was a lot smaller of a jump back then. So it was a lot more accessible because more people were in a position to take advantage of it in the first place. Right. But now those kinds of abstractions that we're leaping forward with new technologies, particularly with AI and things like that, and the people that are enabled to use it early on are already at a head start. So you're going to see that gap in technology. So you're right, like health tech, for example, you create something great. Only a few people are going to be able [00:54:00] to afford it. Java is only a few people going to be able to afford it at first.
Speaker4: [00:54:03] Eventually, more people will catch up. But that I think the lag between the first what's called the people that are early to the system and then the people that eventually get access to it for accessibility reasons, that lag is actually growing as technologies are able to abstract more things and enable us to live in a in a more impactful, real world kind of way. Right. I don't know if that changes with the nature of software versus hardware, but when you're talking about home assistants like hardware, there is a real manufacturing cost that comes into it, whereas with software it's a little bit more ubiquitous. We're able to create it on Android phones at work, even on, you know, really low, low cost hardware. How much of it is linked to hardware? How much of it is linked to software is an open question to me. But the other side of it is that, yeah, I think the more that we enable, the more that you need to be in a position to actually make use of that. For it to actually be a viable early usage. I'm rambling a little bit because I've just completely sidetracked into that whole thing about how much of it is because we're hardware that we can't, like, afford something versus software, right? Like we've got mouse droids and one side of the world that I mean, I say mouse droids, I mean robot vacuum cleaners. But on the other side of the world, we're still using broomsticks, right?
Speaker5: [00:55:25] Yeah.
Speaker4: [00:55:26] And to be honest, we still use broomsticks after the mouse droids, because let's be honest, they're like 90% of the job, right?
Speaker5: [00:55:34] So. So, Ben, you said something about like. Like, like a potential for, like, your daughter grieving. Right? If you're deprecate this robot they've attached themselves to and I'm going to say something that may sound mean or put me in trouble, I don't know. But my wife is in health care and she talked about often getting assistance to older folks. And maybe this is like one of the consumers that could be [00:56:00] a success story for these kind of like assistance simply because the lifespan of an assistant is longer than the person using it. Right. So, you know, when the person passes away, you don't have to deal with like legal implications of somebody getting so sad because now their friend is no longer with them to help them cope through the day. Right. So maybe we've already found like successful use cases with these in health care for the folks who are at the later stage of their timeline life, where they can leverage these robots to cope with life until they pass on. And now you can get rid of any implications of your really killing their mental state as a kid. Now that you have to take the robot away, kind of like taking the dog away from them and now they're traumatized forever and things like that. I don't know what your thoughts are there.
Speaker2: [00:56:52] Yeah, I guess they want to be worried about grief, especially if we're on hospice or something. But one thought, Greg, is we already experienced grief with objects. When I sold my AMG Mercedes seven years ago, I was sad for many, many, many, many months. And so you can think about objects like that. You've had our own, like if it gets broken or stolen or something like you don't get over it tomorrow. And so I think there's different stages of emotional attachment that we can have to non sentient things. And this just takes us into a new category. But I think grief is part of life. All of my kids have had pets and all of their pets have died and shocking ways like, like some things. My my daughter's tortoise was attacked by a neighbor's dog and and we had to euthanize it. And so, like, that's life it's and there's this theme. If if you live a life with love, then you're guaranteed to experience the limits of our grief. Okay.
Harpreet: [00:57:58] Shout out to Mikiko, saw Mikiko [00:58:00] here a second ago. Mikiko, what's going on? Mikiko is definitely the smartest data scientist in the room, for sure. Imagine if all of us did actually come together and make a company or start working together; that'd be very, very interesting. Greg, great discussion kicking off there, man. Absolutely love it. Any follow-up thoughts or questions or anything? Kenji said we'd fire him on the first day.
Speaker5: [00:58:24] I would love to hear Michiko about that. You know that that that that dystopian things that we're discussing.
Harpreet: [00:58:30] So let's go to Patrice first. I had to...
Speaker5: [00:58:33] Go in Oh, but you sorry.
Harpreet: [00:58:34] And we'll let Michiko warm.
Speaker3: [00:58:37] I'm hoping you'll say a little bit more about what's next to you, but for you, Harpreet. But I'm wondering if that might be if it's not what you're planning a next thing. Like there's a there's a. Scott Page has this theory of diversity that he's explained, like those old fashioned Scantron tests where there's like, you put your answer in, then there's holes. And like when you have a specialist in something, they know all about their type of thing. And the idea is like, there's a diversity of knowledge. If you bring in people who have domain expertise, experience, expertise, all different kinds of like diversity in what their knowledge bases are. But wouldn't that be fun if if this were the last artist of data science, But there were some way to have a future form of that collective knowledge that you've facilitated here? And I have one question for you. If did you think like did you have a conscious approach to putting this group together or was it more organic? Like, I'm going to get a bunch of people together and see what happens? [01:00:00] How did you make this happen if you have anything on that that you want to share?
Harpreet: [01:00:05] Yeah. Scott Pace. Scott, you page is awesome, by the way. I've actually interviewed him on the podcast. There's an episode with Scott Page and myself. Go check that out. The model thinker, I actually it came together organically. I just told people I'm doing this thing, come hang out. Like, I think the first office hours was definitely Eric was there, Eric and Toshi. And we go, I don't know if you guys remember. We go, I haven't seen him in a while. And a couple other people, a small Carlos was there too, Carlos Mercado. And then like a few few like weeks after that, Marc came, started hanging out and then I just started inviting people like, Hey, I do this thing, you know, this is before I had like, creator mode and like access to, like, live anything. So everything was kind of like closed. And I was like, I just do this thing where people come together and we just talk, come and hang out if you want. And then people started coming and, and people, yeah, just started coming and showing up all the time. And I loved it. So yeah, it was very organic. I would say the way this all unfolded. I don't know if that answers your question, I'll pause to.
Speaker3: [01:01:11] Yeah, just, just one more thank you for me to not only to you, but to everybody who's been here, because our artists of data science kind of coincided with my first kind of starting to think about the world of data science. And it was really, really helpful to have a bunch of alternative views on things I was exploring and hearing. And yeah, I appreciate that this, that I was able to rely on this group to have interesting new, different things to say than other places and all in one place. So that's.
Harpreet: [01:01:54] Awesome. Well, thank you for being part of it. Thank you for coming. I could not have done it without you guys. Like, seriously, I don't know if I would have. [01:02:00] Like, I probably would like if people didn't come. Like, honestly, if nobody came to the office or the happy hours, whatever, like, I obviously wouldn't have done it. So you guys kept coming, kept showing up, kept having great conversations. And yeah, that, that just kept kept doing the man. Yeah. Mexico's saying, Mexico, come on, where are you at? I can't see like, where's there you go. And he goes saying that this help with quarantine loneliness. It definitely did. Mexico is the only person I knew before starting this like that I'd met like in person I actually knew. We go back now. We go back and I'm excited here to Mexico and Mark are here. We got some going on together. You know, I've been slow to move on that, but trust me, the ideas are simmering. I've got notes, and what we're doing next is going to be awesome. We're you know, I'll fill out that notion page for you soon, Mark, but that's going to be interesting.
Harpreet: [01:02:54] And I'm excited to focus efforts on that. The few hours that I have on Friday are going to be dedicated to that. And then the new channel that I got kicking off the Deep Learning Channel, the Deep Learning Channel will be just all about deep learning. And I'm starting first from like doing deep learning straight from just nothing but like Python and NumPy and just working at it from there. So from first principles. And right now I'm, I'm trying to figure out man in man I am that's the, that's like the, the software where three blue one Brown uses to animate his videos. So I'm trying to like learn that and trying to figure out, okay, if it is worth the time investment or should I just draw stuff by hand and and explain stuff, but I'll be checking that out as well. But yeah, shout out to me Kiko symbol, what's going on? Good to see you as well. How I'll do what you all up to.
Speaker3: [01:03:47] Trying to figure out. Hey. Hey. Sorry we couldn't join earlier, but we were doing a team meeting some. Would you want to say hi?
Speaker2: [01:03:58] What's up, man? Good to see you. [01:04:00]
Harpreet: [01:04:00] Good to see you, too. Man. Good to see you.
Speaker2: [01:04:05] Jimbo.
Speaker3: [01:04:07] So there's Joan, like Joan Matthew.
Harpreet: [01:04:12] He was watching on a small screen there. The the question that that that Greg had. I would love to see if you guys have any input on that. Greg, just kick it off one more time because that was a great discussion and I'd love to hear Mexico's input on that.
Speaker5: [01:04:30] Yeah, so, so so Michiko, I've had, I've had this like crazy dystopian sound sounding like theory about like this future coming up where, you know, a few companies will have control of what I call this power grid that goes straight to consumers nowadays. When you think about GPT, they go to, you know, they enable a lot of startups who build technology on top of it and serve it to more businesses or consumers. But now, you know, are we going into the ages of a power grid where as a consumer, I can just have a utility bill for all of the ML or AI based features that I download on my devices or tools to enhance my life, whether it's to monitor my feet when I walk or my my stress level at work or anything that I do in life where I become increasingly dependable on it. So all we or will we get to that point where a few utility companies or power grid utility companies will serve us these things that make us increasingly depend on them to do things in life? So what are your thoughts there?
Speaker3: [01:05:45] So I'm going to say my piece, but actually it would be fun to also get some other takes on this. So let me just draw on the example of utilities. Most people don't realize how solar [01:06:00] works in the US. Wherever you live in the US, you've probably gotten an advertisement for some kind of solar company, right? Residential solar. It could be insert-name-that's-got-green-or-planet-or-sun-or-light-or-bright someplace in there, right? So you would think that it's a competitive market, but it's really not, for a couple of reasons. One, energy markets are not competitive. You do have an oligopoly in terms of power purchasing, trading, supplying, all that jazz. The second part is that it's an industry where the average sales lifecycle is long. A solar panel is X thousands of dollars. It's a pretty big purchase, and buying and selling energy can be kind of complicated. So even though you have what seem to be competitive local companies, a lot of them actually end up being subsidiaries of bigger companies, or their sales and marketing channel partners. For example, I worked at Sunrun, which is one of the largest residential solar companies.
Speaker3: [01:07:11] SolarCity is another one that people are very familiar with. But a lot of those companies that seem to be competitors actually have a lot of integrations and channels and partnerships that are not necessarily visible to the common consumer, the consumer that's purchasing energy or buying panels, right? So I think you'll get something that is kind of similar to that, only quite a bit more fragmented. Even if you think you're paying into a competitive market, it's actually not one. But once again, you're not going to necessarily know that unless you have domain expertise in that market, or unless you like to read industry reports, in which case you must be fun at parties. No, I'm kidding, I think that's really fun. So that's kind of my take. [01:08:00] And someone recommended this book to me a long time ago, it was about the globalization and militarization of AI. I forgot what book that was. AI Superpowers, I think. And I think that's also another thing that you'll see. But Simba, what do you think? Do you think there will be a grid like that?
Speaker2: [01:08:28] So to make sure I understand the question: the question is essentially, is every AI functionality in every company going to be dependent on, like, two companies' models? Is that generally what it is?
Speaker5: [01:08:44] It's kind of more like us humans, consumers, will be dependent on a few utility providers of AI features, with low-code onboarding. Like, I can say, hey, I want to purchase this little AI feature here and embed it in my device, and now it's serving me and I'm paying a monthly bill for that service, kind of thing.
Speaker2: [01:09:06] I mean, I think it will happen to an extent, the same way that there's Google Cloud and Azure. It does kind of consolidate around the big players, because there's a certain level of scale you need to be able to train something like a large language model or any of these foundation models. The point is that most of those things mostly output embeddings, and people are just using them for their own specific use cases anyway. So I think it will happen. I don't think it's much more of a risk factor than being as centralized as we already are. Like, if one of the big cloud providers goes down, this Zoom chat would probably go away. That's [01:10:00] kind of my take.
Harpreet: [01:10:05] Thank you very much, Mikiko and Simba. I'd love to hear from Yusuf on this. Yusuf, if you want to jump in, let me know. I used one of your tips earlier today when I was coding. He said, don't do for i in range(len(something)), use enumerate instead. And I started doing that. Thank you, Yusuf. We'd love to hear your take on this.
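[Editor's note: for anyone following along, the enumerate tip mentioned above can be sketched in Python like this; the list and variable names are just for illustration.]

```python
fruits = ["apple", "banana", "cherry"]

# The pattern the tip advises against: manual indexing with range(len(...))
for i in range(len(fruits)):
    print(i, fruits[i])

# The idiomatic alternative: enumerate yields (index, item) pairs directly
for i, fruit in enumerate(fruits):
    print(i, fruit)
```

Both loops print the same index/item pairs; enumerate just avoids the extra indexing and reads more clearly.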
Speaker3: [01:10:23] Also, quick comment: this is why we need more citizen data scientists, because they will be the anarchists of the data science and machine learning world in the future. You train up more citizen data scientists, you get more badassery like Stable Diffusion, my friend. Community developed and trained. Yeah, right?
Harpreet: [01:10:41] Right up. Yeah.
Speaker2: [01:10:46] Yeah. So I'm usually a lurker. I look on LinkedIn, but Kosta, I used to work with Kosta at our last job, and he's like, this is the last one, you have to join. So, okay, I'll join. So thanks, Harpreet. Thanks, everyone. I'm going to miss this, especially my two favorites just keeping it 100, Greg and Vin. I'm going to miss the takes from you two. Thanks. In terms of my take on this: for things that rely on data, it might be the case. But for things that rely on compute, as compute costs go down and it becomes easier for smaller players to have the kind of compute that the bigger players have now, it would be hard to consolidate. The data advantage, I think, will remain, so for things that rely on data, that's more likely to happen.
Harpreet: [01:11:55] Yusuf, thank you very much. Let's see if there's any other questions or [01:12:00] comments coming in. Ben's talking about an AI therapist. I think that'd be awesome. I would probably see an AI therapist sooner than I would see a human one. It's been on my to-do list this year to find a therapist.
Speaker2: [01:12:14] Is that like, a version of, like, better help or something?
Harpreet: [01:12:18] Better help, huh?
Speaker2: [01:12:19] You've never heard of that? Oh, you can sign up online for different therapists. They have ads on every podcast, so you could start reading BetterHelp ads in your podcasts. That'd be pretty funny. So, Joe, you could transcribe, there are all these TikTok and Instagram therapist influencers, you could transcribe all the advice they're giving, and then you would just have that response, like: there you go, you're done. You've got to be your best self, you've got to get out there.
Speaker4: [01:12:47] You could do that with professional coaches as well, right? You could also do that with FIRE, with finance gurus on YouTube. Just scrape all the YouTube channels, create a model. You could come up with the ultimate crypto guru bot, right? That's something I'd love to see tomorrow. That would make my day.
Harpreet: [01:13:05] I mean, that could be possible, right? Isn't there an extension of Stable Diffusion that can generate, like, movie-ish scenes? I think I've seen something like that, or heard of something like that.
Speaker5: [01:13:16] Plus, I can see legal folks with cufflinks, their cufflinks must be jangling, with the legal implications, copyright implications, right? All of these things.
Speaker4: [01:13:28] So this company that we're all building, we're going to create an AI bot version of Sam Bankman-Fried, right? That's the goal. That's the ultimate.
Speaker2: [01:13:35] You said that, not us.
Harpreet: [01:13:37] So that's good.
Speaker2: [01:13:40] Maybe Jordan Belfort, Matt said. I don't know. That'd be funny too.
Harpreet: [01:13:47] Are you reading this?
Speaker2: [01:13:47] I have a question for everyone. I'm sure you saw the latest GPT stuff, like how incredible the text generation is. Did you see [01:14:00] the one where it explains why a bubble sort implementation is wrong, in seventies slang? It was mind-blowing, really. What do you think the impacts of such technology are going to be on creative writing? Like, if you imagine a future where someone who creates content can just be like, write me a post on delayed gratification, and boom, boom, it's written. How do you think that's going to change how content is created from a writing perspective?
Harpreet: [01:14:36] Yeah, I think I've seen that. It was like, explained in the style of a gangster. Hey, you see here, the problem here with this is... There's already companies out there, I think Jarvis is one of them. Jarvis is actually a company that helps with copywriting and stuff like that. But yeah, I'm excited for it, because I think it's just augmenting human creativity. At the end of the day, the prompt goes in, the stuff comes out, but it's still the human that has to look at the stuff that comes out, determine if it's decent or not, and then add their own spin to it and remix it. But Kosta, let's hear from you.
Speaker4: [01:15:15] Here's where that breaks down, right? When you're creating generalized content, fantastic. Like we were talking about therapy influencers: a lot of their content, at a general level, is very powerful, very good, very useful, right? But how do you individualize that? How do you individualize the output of something like GPT-3? If you had to create generalized content, fantastic. But at the end of the day, if I'm going to ingest therapy, it has to be specific to my needs, right? So how specific could you actually make it? How responsive could you make it to the nuances of human psychology, particularly from a psychotherapy [01:16:00] standpoint, or from a financial advice standpoint, or from any of those things? Right. I mean, you and I were talking about this a few months ago, about how you create brand content. You could auto-generate brand content at some point without needing to spend as much energy. But how much of that can you actually nuance? How nuanced can you get with that brand content that's being generated, and how generic does it become? That's kind of my tack onto that question: this could get really good for generalized content, but what's the step you'd need to take to make it nuanced enough to deal with the individual aspect of it, particularly with therapy bots, particularly with branding bots and things like that?
Harpreet: [01:16:49] Mark. So you had your hand up, but I guess maybe.
Speaker6: [01:16:52] Uh, mine was a tangent, so it's kind of related, but I can hold off on that till later. Just an idea I have for all these different creative, generative AIs.
Harpreet: [01:17:04] Let's go to Mikiko.
Speaker2: [01:17:06] Yeah, in terms of nuance, Kosta: if it's able to mimic seventies gangsta slang, why couldn't it mimic the nuances that you have, potentially? So it could be fine-tuned on your stuff, and then not need much data, and just keep generating content like the content you generate, right? That's my take on the nuance.
Speaker4: [01:17:34] Scary shit, though.
Harpreet: [01:17:37] Greg, let's hear from you.
Speaker5: [01:17:38] Yeah, I guess there's a question that I haven't been able to answer, and that I think about constantly when it comes to generative AI, which is: who owns the output, the rights to the output, right? And who needs to be rewarded? For example, a lot of the things these models get trained on may be [01:18:00] somebody else's code, somebody else's art, and things like that, right? So when the AI generates an answer in a certain style, does the original artist who gave their data for that model to train on get a commission? Should he or she get a commission on it? I understand that the person who created the prompt should own the copyright of the output. But at the same time, if I'm an artist or a copywriter who willingly gave my data for that model to be trained on, should I get a little bit of commission on that output that gets consumed downstream? And with that, how do all the legal implications play out around that? So I keep thinking about those things. When I see something like ChatGPT, is that what it's called? I think about that a lot, in terms of how everybody benefits from it. Because I can think of many cost centers for maintaining a large language model, but there's got to be some sort of payment to the people who give their data for training these models as a cost center, too. So that's just my take.
Harpreet: [01:19:21] I'm wondering: does the person who created the keyboard get a cut of royalties every time somebody gets paid on Medium? You know what I mean? I hope that analogy is making sense. I'm just saying, something is a tool for people to use, right? Yeah, that's the issue I have with that. It might be sampling a bunch of different things, but we do that naturally throughout life. We're inspired by a bunch of different things. We take in ideas, meld them, mesh them together, and create something new from them. I don't necessarily have to go and say, oh yes, by the way, [01:20:00] this idea for this startup came to me while I was walking down the street and I saw a billboard, and now I have to give these guys 0.005% of revenue or whatever, you know what I mean? That's the issue I have with it: for every house that gets built, do we send a royalty check to the inventor of the hammer? Let's go to Mikiko, and after that we'll start to wrap it up. I've got to head to a hockey game.
Speaker3: [01:20:27] So, this is something that I was talking about with Karadzic, for people who know, my better half, right? He's a designer. He makes his living off of making really beautiful stuff. And the thing that we were both discussing is that right now, for example, to generate these images or assets, you throw in a prompt and you get an image. Can an artist actually work with that? Yes, but that's actually not how most designers work with assets. When you're working in Photoshop or Illustrator or whatnot, as a designer you tend to work in layers. Your background will be a layer, the items within it will be a layer. So I'd argue that for people to say that generative AI is enabling artists, you would have to show me that the output, very strictly, is in fact in a form that an artist would actually use in their workflow. Because right now, if you generate an image or a video, it just spits it out as a single file; with a video, you might get the image and the audio. But once again, that's not actually how most artists work. So to a certain extent, if we're going to make the argument that we want it to be enabling, that we want it to be assistive, we kind of need to actually show proof of that.
Speaker3: [01:21:51] Something that I thought was very cool that MailChimp did: we had a product called Creative Assistant, right? Our main customers [01:22:00] are small and medium-sized businesses, and we actually used generative AI in a way that was meant to provide value to them. How did we do this? Well, the customers, the businesses, would provide their design assets: color palette, logo, a sample of body text, right? And then we would essentially generate transactional emails or marketing campaigns using the assets that they provided. The goal wasn't to try to replace their business. The goal was to automate a low-hanging-fruit task that, if they were to take it to Fiverr to have every single asset designed, would not only be kind of dull work for a designer, but also something very hard to price and batch, right? But that's not to say that they would replace their designers, because at the end of the day, you still need new, creative, unique assets to put into the training pipeline to get a better, more personalized output. So there, the use case was very clear to me as to how the AI was helping our customers and users. But in a lot of cases, that's not what's happening. For example, Copilot: the reason a lot of engineers and open source contributors were really upset about the way Copilot was developed was that Copilot had essentially been trained on all GitHub repos regardless of the actual license attached to each repo.
Speaker3: [01:23:23] So with open source, there are certain things you can and cannot do depending on the license that is part of a project. But at least initially, Copilot was trained on everything, and no attribution was given. And there are also other examples where design elements or a style were directly ripped off from a training data set and no attribution was given to the artist at all. Now, all of us get really mad when people steal our LinkedIn posts, right? I know a lot of us have had that happen. We get really mad when it happens. How is it suddenly that that no longer applies to this other [01:24:00] use case? Why is it that our LinkedIn posts are holy and sacred, but an artist who spent 10, 15 years developing their skill set and created this body of work that's then used in a training model doesn't even get any attribution? Why is that suddenly okay, especially if they rely on that for their living? So that's kind of my $0.02. Kosta?
Harpreet: [01:24:21] Go for it.
Speaker4: [01:24:22] Because it's cheap. Because ultimately, and we've come down to this conclusion so many times on this happy hour, things are run purely for profit. The way we look at entire economies, GDP breaks that entire cycle, right? We're looking at productivity as just the sellable asset. So if you can make something cheaper, it's worth doing, right? If you can write code cheaper, it's worth doing. If you can create image assets cheaper, then apparently, according to our current society, it's worth doing, irrespective of the implications: genuine infringement of creativity and of people's ownership over their own content and material. I totally agree with you. It's completely insane. You see this in the music industry as well. You see a lot of resampled material. I listen to one song and I'm like, hang on one second, isn't that just the intro riff straight out of Michael Jackson? And they just use that as a background beat to something else, and I'm like, all right, okay. Or take TikTok content. How much of it is just, hey, there's that one song that everybody knows, I'm just going to slap this over it and kind of dance to it? The number of YouTube Shorts and Instagram Reels I'm seeing that are literally just someone else's YouTube Short that some dude is watching, and you watch this guy watch that Short and nod to it and shake his head and go, yeah, man. That's not content.
Speaker4: [01:25:57] At what point is that content, right? So we seem [01:26:00] to be okay with stealing shit from people as long as it's for the benefit of making money, and that is mind-boggling to me. The saying "bullshit baffles brains" keeps coming back to mind for me. I should probably get that tattooed. If I ever get a tattoo, that's probably going to be it. But seriously, at some point, for this to work, and I agree with you, right, if I had to do a generative soundtrack, I would want it in layers. I would want the drum line as a separate MIDI output, the electric guitar line as a separate MIDI output, and things like that, if I were to compose something generatively. We kind of do that with samples and loops, but in the music industry, all of those loops and samples are extremely well controlled in terms of copyright. We haven't seen AI hit music generation nearly as easily as we've seen it hit image generation. But yeah, it boggles the mind. Even if you could get that in layers, is the right attribution going to the right people? It's a huge issue. And back to the GitHub Copilot thing: didn't they just get a lawsuit filed against them? Not this month, a month ago, in early November. I mean, I didn't really think about it that way before, but now I'm like, hey, hang on, I've got bucketloads of code at companies that I've worked for with sensitive IP, right? You're talking defense contract software that I've created that I know, for a reason:
Speaker4: [01:27:34] Hey, you should use this only at a proof-of-concept level, or, this is code that actually works, it's tested well, and it works for this purpose. But if I know that part of that code could potentially end up in the hands of someone where it doesn't really make sense, and they don't really know the context and the limitations of that code, particularly if it's machine learning models, that freaks the crap out of me. So I see there being, at [01:28:00] some point, other companies coming up. I don't know whether GitLab has any kind of Copilot move they're going to make as well, but I'm seeing companies moving to GitLab. They're happy to change provider. It's just GitHub, a source repository. How long until there's someone else who just provides something similar? DAGsHub came along for data scientists and actually presented a bit of a market threat at some point, to a very small proportion of people, true, but I don't think we're limited from saying, oh, we'll just use a different platform then. And I think that will happen if we see more of this. I think the issue is that it's a little less traceable when you're talking about generative stuff. How can I provably say that this piece of content came from me when it's a generated image that kind of looks similar but is different enough, because it's amalgamated from a few billion other images?
Harpreet: [01:28:56] Yeah, but it's still the human that has to give the prompt, that has to supply those ideas with their own creativity, even though it might be sampling from a bunch of different things. So, I think it was yesterday, yes, it was yesterday, I did an AMA session with Dr. Tristan Behrens, who made, he's got something called a hexagon machine. Look him up on Spotify, look him up on YouTube. He does a lot of AI-generated music. And it was interesting the way he did this. I think it was like 4,000 or 400,000 songs he sampled, songs transcribed as MIDI files. He took those MIDI files, turned them into musical notes, and then used GPT-2 to create more musical notes that he then had to go and enter into a synthesizer, or whatever you use to make MIDI music with. But he's still the one who had to provide the prompt and all that. Yeah. Um, so yeah, I'm sure we could go on about this [01:30:00] forever. Anybody who wants to share one last closing thought, let me know, because I've got to start wrapping it up. Shout out to everybody that's been here through it all. Go for it.
Speaker4: [01:30:10] Okay, one last closing thought: everyone in the room. I think the one thing that I've learned in, what is it, a year and a half, almost two years that I've been joining this podcast, happy hour, whatever you want to call it, is: find yourself a room of people that can teach you just how much you don't know. They can teach you just how much you don't have a clue. I'm looking at Vin, I'm looking at Greg: the knowledge bombs. Russell, Mikiko, everyone across the room. I know Joe's not in the chat anymore, but geez, the knowledge bombs dropped by everyone in this room. You know, Serg, half the stuff you say in the chat on the side, I just wish you said it out in the actual chat. I'm just like, okay, damn, right. So, everyone in this room. I'm not going to name more names, because I'll just lose the list and we'll be here forever. But find yourself that room where people teach you what you just don't know, right? How much more you need to know. That's been this room for me over the last year and a half. I'm sure I'll find another room like that. I think I might have it at work already, actually. But yeah, find yourself that room, guys, if you're out there looking for ways to grow and just learn where you are. This has been really great. Thanks, Harpreet. It's been amazing.
Harpreet: [01:31:25] You're welcome, man. It's been my absolute pleasure. Thank you all for being here, for taking the time to join. Like, all those times when I couldn't host, for whatever reason, you guys stepped up and took over as host for me. Thank you for that. Thank you for being here, just chatting and helping me create this space. I could not have done it without you guys. This is, like I said, the most important thing I've ever done career-wise. And I know a lot of goodness has come from it for everybody in this room, just from being associated, affiliated with each other. I've seen the opportunities and things that have happened for everybody. I'm glad [01:32:00] for you all. I'm so happy. And I'm excited to see what's next. I mean, we're still in touch, man. Most of us have each other's phone numbers, various chats and all that. It's not like this is the end. It's the end for now. I've got something with Mark and Mikiko coming up. I've got my deep learning channel. I'm still podcasting, right? I'm still doing AMA sessions where you guys can jump in and have conversations with experts and stuff. But yeah, it's been amazing, these last two-plus years. I really, really appreciate everyone here tuning in. If you're listening on the podcast, I know there's a lot of you listening after the fact: shoot me an email. You guys know my email address, The [email protected]. I appreciate all of you guys. Thank you so much. And remember, my friends: you've got one life on this planet, why not try to do something big?