Listen to the conversation between Wade Erickson and Christopher Lafayette, Emergent Technologist and Founder of GatherVerse, in which they cover the journey from Deep Learning to Deeper Understanding.
Join Wade Erickson and Christopher Lafayette, Emergent Technologist and Founder of GatherVerse, as they explore the evolution from Deep Learning to Deeper Understanding. Discover how these advancements impact technology and society.
Wade Erickson (00:13):
Welcome all to another episode of Tech Leaders Unplugged. Today we're getting unplugged with Christopher Lafayette, founder of GatherVerse. Our topic is going to be AI and the quest for superintelligence beyond human boundaries. I think a lot of what's been in the news over the last year has really been about generative AI, and a lot of the fear talk is around superintelligent general AI, where the machines start to build their own consciousness. That's the area we might be talking a little about: what's the extension beyond the stuff that we're playing with today, and where the applications can start to drive a lot of their own decision making. So, Christopher, thanks so much for joining us today and sharing your background and your knowledge in this area. Tell me a little bit about yourself, a little bit about GatherVerse, and then we'll jump into the topic.
Christopher Lafayette (01:09):
Well, first off, thank you, Wade, and thank you to the whole LogiGear team for having me come on by today and spend a little bit of time with you. It's definitely a gracious invitation, and I'm always happy, when it comes to talks such as these, to spend more time with technology platforms and communities that are, as I say, chopping wood at such a level that I know it helps extend the many voices of people who have so much to express and so much to contribute to a global dialogue and conversation. A little bit about my background: I'm an international speaker, emergent technologist, and humanitarian here in Silicon Valley, someone who has had the opportunity to work with, and be taught by, brilliant-minded women and men in industry across so many different sectors of technology; to have gone the places that I've gone, seen the things that I've seen, and worked with the people I work with today; and to be a steward in servant leadership when it comes to platforms like GatherVerse.org, or when it comes to the different ventures and breakthrough opportunities we're building with Aug Lab. I really find myself in a thankful position to be able to experience and understand more about technology, and that allows me to understand more about the things in life that, to me, are most essential. We're at a time we've never been at before: we've never been this close to unlocking physics, and we've never been this close to better understanding science in its relationship with applied science and technological development. I'm happy to be here today to share more.
Wade Erickson (03:11):
Awesome, awesome. So, our topic: AI and the quest for superintelligence beyond human boundaries. Tell me a little bit about that idea you came up with for a topic, and what excites you about the future. We get a lot of naysayers about that superintelligence aspect, the Elon Musks and others who are quite concerned, which I think is rightful; you should look at the good and the bad of anything like that. But tell me a little bit about this topic, and let's just jump into it.
Christopher Lafayette (03:48):
Sure. Let's kind of roll, and let's go with what we know. Let's dance for a second. A lot of people may hear different words like AGI, artificial general intelligence, or superintelligence, and part of my attempt today is to disrupt a little of the rhetoric that's used, because what we have here is a world of many different technologists, and those that are non-technical, using words when they have no idea what they mean. And there are a lot of us that are building and developing, that have been in this space in AI for a number of years and can actually prove it, beyond the so-called occasional LinkedIn expert, which is totally fine too. I think we've become so exclusive as technologists, as if other people can't become experts or can't learn sooner rather than later. I'm always an advocate for anybody who has a desire to build and make technology, who's doing it in a responsible fashion, not taking advantage, and doing right by how we build emerging technologies. So when we start talking about superintelligence, or AGI, and theory of mind, and self-aware AI, and cosmic AI, and narrow AI, and simple AI, and all of that, I want to put this in the right sense, because I see a lot of our motivations in all these different LinkedIn posts and white papers. And I do read these white papers, and I do read these LinkedIn posts, and I do read these books; I have read Suleyman's The Coming Wave and Homo Technologicus, and a lot of these people writing these books. While it's noted, and some of these are great literary works, if you look at what we've seen happen in the past year and six months, a lot of what they have said is either erroneous, or it hasn't come to pass and it will never come to pass.
And when we hear some of the people in the world talking about artificial general intelligence, I'm convinced that a lot of them have no idea what they're talking about, and that they're getting this from people that are (a) no longer living, (b) looking at something completely different based on wild hypothetical assumptions, or (c) talking about this as a general colloquialism and not something that's actually grounded in reality. When you talk about AGI, I get that it's a colloquialism. But to minimize the incredible magnitude of intelligence itself by simply calling it general, to me, almost doesn't make sense. And I know I'm not the only technologist in the world that thinks this way. There are others who I have good regard for, and there are those I'm happy to debate at any given time on this. But let's go back for a moment, because I want to make some sense of this in terms of the linear trajectory, when we start to think about AI as a whole and where we find ourselves today in this new AI era. Let's go back to, say, the farming revolution. There was a time when that was the mainstay of society and global civilization, when we think about farm culture. At some point we started moving into industrial ages one and two. We talk about it being an industrial age, but what it really was, was a disruptor to that which came before. A lot of the tools, resources, processes, and methods they used in the farming age got disrupted, and then we found ourselves in another revolution, another age of work and business and operation in the world itself.
And so what happens is, now that you're in this new industrial age one and two, something was accelerated, a cadence was updated, and a new tempo was set. Typically what we'll see from revolution to revolution, at scale, is that things become quicker at some capacity; communication is updated and it flows much faster in the world of business. Industrial ages one and two held on for a very long time, and then they brought us through the fifties, the sixties, the seventies, and the eighties. The eighties, when it comes to technology, were a very prototypical generation, I would say almost more so than the sixties and seventies, because depending on which emerging technologies you're looking at, and there are 270-plus emerging technologies that abide within the eco habitat, there were people writing books about spectacles when it comes to early considerations of XR, there were those building artificial intelligence on the campuses of Stanford, and there were those architecting and building out the perceptron. This is between the fifties, sixties, seventies, and eighties, and a lot of these were pioneer efforts. And obviously we have those that were building and writing the considerations of what we would look at when it comes to AI, such as Alan Turing. Whole entire universities have been made based on Alan Turing. But if you actually go and read some of his work on the benchmark standards for the Turing test itself, he didn't write a whole lot about how you would actually quantify and qualify the levels of AI that have now come to pass, what we're looking at when it comes to generative AI, diffusion models, convolutional neural networks, and more.
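As an editor's aside on the perceptron mentioned here: the core idea fits in a few lines of Python. This is a minimal illustrative sketch (the function names and the AND-gate task are choices made for this example, not anything from the conversation, and it is not Rosenblatt's historical implementation):

```python
# Minimal perceptron sketch: one neuron with a step activation,
# trained with the classic error-driven update rule.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with 0/1 labels."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            # Step activation: fire only if the weighted sum crosses zero.
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = y - pred
            # Nudge the weights toward the correct answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# Learn logical AND, a linearly separable task a single perceptron can solve.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

A single perceptron can only separate linearly separable data, which is exactly the limitation that later motivated multi-layer neural networks and, eventually, the deep learning discussed in this conversation.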
And so how are you able to quantify the level of what we talk about simply as AGI versus theory of mind, self-aware AI, or even superintelligence? I know that Nick Bostrom did his best in coming out with a book on superintelligence, but that certainly doesn't mean it's gospel and that's how it's going to be. It's just one singular thought of what one man thought it would prospectively be, with all the different things that were considered in AI at that present time. We have the Ray Kurzweils who say, hey, we're going to reach some serious technology here by 2029, and by 2045, he says, it's going to be the singularity. I categorically disagree with Kurzweil's 2029 assumption. I think it's a wild guess that's stuck, suspended in air, though I wouldn't be surprised if it came sooner rather than later. And when it comes to the Elon Musks, it's hard to believe. I think Elon Musk is a brilliant innovation technologist, I do. But when it comes to where he stands on artificial intelligence and growth and development, you can't convince me that on one hand, this time last year, you were signing letters getting people to buy in, including Steve Wozniak, to the idea of slowing down development of this type of technology, and then the very next year, quite literally weeks ago, you did the biggest funding round for a startup we've ever seen at that early stage. We've never seen anything on the books that big: over $6 billion in one fell swoop, superseding what we saw years ago with Magic Leap, which took years to accumulate the billions of dollars that it did at the startup level. So on one hand there's this call to slow down, but then the next minute we must hyperscale and hyper-accelerate. I'm an accelerationist. I am a huge fan of the hyperscale. I am a huge fan of disruptive technology for good.
So when we look at these types of technologies and the things the technologists are saying, we do have to take it with a grain of salt and find out and understand what their motivations are. The reason we highlight some of these technologists is the level of influence they have on the world of AI, on what we're looking at even when it comes to scaling laws, and on the orders of magnitude we're seeing in the development of AI infrastructure and sovereign AI. We have to really take a look at a lot of the proprietary systems out here, the big ones in the world, such as the OpenAI platforms, or Mistral, which is building incredible things in Europe along with the relationship it has with AWS, or Amazon, if you will. Then we see what the folks at Microsoft are doing with all the different things they're building in the incredible Copilot ecosystem, and the deployment of these massive GPUs. Pretty soon we'll be seeing Blackwell, and then more types of GPUs and architectures coming out, and what we're seeing with CUDA and the incredible things Nvidia is doing, let alone AMD, and then some things Apple just put out recently in the news cycle this week with Apple Intelligence, which is extraordinary. And thinking about these large language foundational models: while I know so many people are paying attention to the large foundational models, I think what really slipped under the rug on Hugging Face are the small foundational models.
And right now I'm more concerned with, and have more of a focus on, the small foundational models than the large language foundational models, because with the small models, the level of refinement of the datasets they have is just extraordinary: what they're able to do with models like Phi, and then Apple coming out with OpenELM, the efficient language model. When you look at an 8-billion-parameter model in comparison to, say, a 350-billion-parameter GPT-3.5, and the 8 billion can be just as potent as the 350 billion in what's rendered in discovery, I find that to be very fascinating. But nevertheless, let's move on. As we continue to go forward through the ages and dispensations, if you will, of progress, now we've entered into the information age of the nineties. And when you enter into the information age, what we had to ask ourselves is: what happens when you give a whole entire civilization access to information it never had before, and that's received while we're still operating in, quite literally, the industrial-age way of work? Well, what we saw, Carlos, is that we received all of this information in the nineties by way of the internet, if you will. And we must remember that technology moves in increments, shifts, and leaps. An increment would be like an iOS or Android update: something small, but impactful, and it impacts millions of people. A shift would be something like a new head-mounted display, an HMD, that's deployed, or a desktop, laptop, tablet, or mobile device. It doesn't affect everyone the same, but it certainly is impactful, again, to millions. But then a leap, we don't see those too often. A leap can be Web1, Web2, Web3, the cloud, the internet itself, the metaverse, AI, blockchain, and so forth.
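To put the 8B-versus-350B comparison above in perspective, here is a rough back-of-the-envelope sketch of memory footprint. The parameter counts come from the conversation; the 2-bytes-per-weight (fp16) assumption is an editorial one, and real deployments vary with quantization:

```python
# Rough fp16 memory footprint: parameters x 2 bytes, in decimal gigabytes.
# Quantized (e.g. 4-bit) models would be several times smaller still.

def fp16_footprint_gb(n_params: float) -> float:
    bytes_total = n_params * 2      # 2 bytes per fp16 weight
    return bytes_total / 1e9        # decimal GB

small = fp16_footprint_gb(8e9)      # the 8B-parameter small model
large = fp16_footprint_gb(350e9)    # the 350B figure cited above

print(f"8B model:   ~{small:.0f} GB weights")   # fits on a single high-end GPU
print(f"350B model: ~{large:.0f} GB weights")   # needs a multi-GPU cluster
```

The roughly 40x gap in raw weight storage is one concrete reason small, well-curated models are attractive for on-device and single-GPU deployment.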
And so those specific types of technologies have the capacity to affect whole entire civilizations categorically. When we entered the information age, we treated it such that we ran it with industrial-age procedure and protocol, because we'd never been this way before, which is totally fine. But in this hyperscale, if you will, from the nineties into the two thousands, it created this voluminous bubble. Many warned that there was a bubble that would burst, and indeed that bubble did burst. And what happened when the bubble burst? The ecosystem had to right itself; it had to adapt, it had to change. Then you had people like Steve Jobs who would get on the stage years later, after the burst. I remember here in Silicon Valley it was scarce. You had for-lease signs and for-rent signs all over the place, in places where there was a time you would see technologists everywhere, because we here in Silicon Valley had been doing technology for a very long time. I hear different people around the world saying, well, we'll be the next Silicon Valley. I usually have two things to say to that. One: don't try to duplicate what we do in Silicon Valley, because you're not going to, and because you have your own unique IP in what you offer ecologically to the world. You'd be better off staying in that, innovating on that level, and becoming whoever you're going to be. Two: you can't duplicate the decades of innovating and building, with platforms like HP coming up on 90 years of being around. We've been doing this a very long time, and it's been handed down through the generations to us.
And a lot of us learned so much of what we know about technology by way of osmosis. You can't get it at schools, at Stanford and other universities, because by the time it reaches there and is developed into curriculum, it's already been rendered, and we're already looking toward what's next. They're receiving the 1.0 of a thing, if you will, and we're already going to the next. That's not to say great innovation isn't happening, but even on college campuses, there's more innovation happening outside the classroom than in the classroom. And the student populace knows that; that's why we enjoy academia when it comes to our students and what they're innovating and building. Some of the greatest platforms in the world have actually been innovated in dorm rooms, as legend has it, if you will. So you see that happening. But let's move on. They were operating and building in this new information age, and the reason someone like Steve Jobs was so iconic is not just what he was able to do with Apple and the comeback story, if you will, of being fired from Apple and coming back. It's more than that. There were a lot of other people in Apple building these things, which we credit to Steve Jobs, that allowed him to present what he presented when it came to the iPhone and the applications. Applications weren't new when he announced them, but the way he presented them, making them less abstract and more concrete and understandable, with the design principles of the intersection of liberal arts and technology: nobody did that better here in Silicon Valley than Steve Jobs and Apple. And we saw that happening even with as mighty as Adobe has been.
And we really care for those at Adobe, and we have people in the GatherVerse community that are part of the Adobe community. But even there, there was contention at that time between Apple and Adobe when it came to design considerations, and there were some really powerful things that happened. There's a lot written about that which I encourage people to take a look at, where Steve Jobs even, you know, wrote a letter based on some of the directions Apple wanted to move in versus what Adobe wanted to do at the time. But anyway, that's a lot of old but very powerful history. What I'm saying is that in September 2007, we entered the applications renaissance. It wasn't an era; it was a renaissance. From essentially Q3 or Q4 of 2007 until 2019, before the pandemic, we were running through the applications renaissance, and what that allowed, Carlos, is for people to have access to applications, meaning to apply. They had access to applications, and they were able to build tools and to build things they never had before, because now there was a level of democratized access that we'd never had. Now take that one line of democratized access to general civilization, and at this point it spread throughout the world: with Android, and with other operating systems beyond Android and beyond iOS. So now all the different populaces, from metropolitan to transcontinental, had access to tools they'd never had before. And with these tools, they began, and we all began, to innovate and to build, because now we had available tools, and now we had the cloud in which to do it.
And that is what ultimately brought us into the most disrupted time we've ever known contemporary to our time: the 2020s, when we experienced a pandemic. By way of COVID-19, no matter where you stand on the issue of the pandemic itself, one thing that's clear is that we became more virtual at that time, and we are still more virtual in the past 59 months than we had been in the past 59 years. We ushered in a new era of work, and it was hybrid by nature. We had remote, distributed workforces and product delivery teams that to this day are still working out of their living rooms, while other people are back in the offices. So there's a nice, comfortable fit, where there are sufficient cloud systems and sufficient in-person systems for people to work in an almost harmonious fashion, and we as a society have found a way to make this work for business and communication. But what we haven't had time to process and consider is that, once again, just like when we went from the farming age to industrial age one, industrial age two, and the information age, the cadence and tempo of technology has accelerated. What made it so disruptive, putting us into virtual ecosystems of interaction and communication, is what helped bring the metaverse along further. And by way of the metaverse being a virtual ecosystem of interaction and communication, those of us that are part of the XR community know it. I've now said that AI is actually an extension of XR as well, because it's an extension of human reality. When you start thinking about human intelligence, now we're starting to see all different types of platforms come to the surface, and they're not new. I heard somebody the other day say that generative AI was new; that couldn't be any further from the truth. It's not new, it's emerging. And so these technologies within the eco habitat are emerging, scaling, and pairing together.
It's what we call the emerge pairing. So now, when you see things like non-fungible tokens, or you see crypto assets and currencies, meme coins, altcoins, and real-world assets deployed on blockchain, on-chain technology, or you see things like the metaverse, or the hyperscale of the tens of billions of dollars invested in augmented, mixed, and virtual reality, well, that's all going on for a reason. When we see all these technologies come to the consumer index and become sudden billion-dollar, multi-billion-dollar industries, that starts to change the game and say: wait, what happens when the technology becomes more advanced than the technologist? As I said many years ago, and it's documented on several occasions, that has now come to pass because of what we see with the advent and rise of machine learning, deep learning, and neural networks, which have been catapulted out into society in such a way that not even some of your best technologists knew, by their so-called predictions, that this was going to happen. That's why I dispel a lot of the so-called futurists out there, because futurists are looking at the Gartner Hype Cycle, and the Gartner Hype Cycle is so limited and minuscule compared to even the eco-convergent framework I've been working on for years. I like that they've created the Peak of Inflated Expectations, the Trough of Disillusionment, the Slope of Enlightenment, and the Plateau of Productivity. I've always thought that line was brilliant. But in terms of the different technologies represented in the Gartner Hype Cycle, it's completely minuscule, and the Gartner Hype Cycle is set for a nice, good disrupt itself. It was good for its time, but now, at present, the Gartner Hype Cycle is hype.
So when it comes to what we're really dealing with, the representation of all the different types of technologies we see out here within the eco habitat, there are only two things that can influence each technology within the eco habitat of 270-plus technologies: artificial intelligence and human intelligence itself. That is what brings us to our session today. We find ourselves today looking at the different types of models being deployed, that we're using, that we're consuming, while most major companies and startups are still wrapping their minds around how to use AI. We're only now at a 37 to 38 percent adoption rate when it comes to GPT, Gemini, Grok, and the other models that are out there. The whole world hasn't even begun to use this; most of them still treat AI like it's a search query. So when we start talking about the so-called path to AGI, the so-called path to superintelligence, my assessment is that we will never reach superintelligence as we think we know it today. Because on an amorphous scale, once we do reach this level of AGI, which many of us believe we're seeing glimmers of today, and we in fact and indeed are, though it hasn't completely arrived or been deployed yet, I believe that what we now refer to as superintelligence will be disrupted, and we'll see something completely and categorically different. The input has come from the human mind, if I can say it like that, in terms of prospective trajectories, but we haven't taken into consideration the capabilities of the models we're now dealing with. So the human in the loop, and the robot equation, will be considered when it comes to the future trajectories of AI. And I do believe it's time for fresh naming conventions beyond what we look at as superintelligence.
And I believe we can also find a time when we look at superintelligence, what we thought would be almost the nexus or the zenith of artificial intelligence, before cosmic AI. I don't want to go too far to the cosmos with that. But I believe there's more to artificial intelligence horizontally than a linear progression of acceleration and scale toward something none of us can even imagine: what it would be, what it would look like, how it would function and perform. There are too many intersecting technologies. And most people, Carlos, and I'll be done with this for a moment and we can talk more on it, most people, when they look at artificial intelligence or generative AI, don't understand all the different things that make up AI. AI is not a single-variable equation; it's a multi-variable equation. There are so many things that make up artificial intelligence: the algorithms, the process, the neural network, the weights, all the different types of algorithms that are integrated in, all the different levels of reasoning that make it what it is, all the different things it takes to even scale it, and the different scaling laws and how scaling laws even work empirically. There are so many things that make up artificial intelligence. So to have these wild predictions and assumptions of what it's going to be, to extrapolate the points, put them on the curve, and gauge the future gates of trajectory, a lot of that is made up. And that's why we continue to see not only the disrupt, but how the disrupt comes about: a good thing is superseded by an even greater thing, and a greater thing is superseded by an even better thing. That's what we commonly refer to as the disrupt.
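As an editorial illustration of the empirical scaling laws mentioned here: one well-known published form is the Chinchilla fit from Hoffmann et al. (2022), which models a language model's loss as a function of parameter count and training tokens. The constants below are their approximate fitted values; treat the outputs as illustrative, not as predictions for any real model:

```python
# Chinchilla-style scaling law: predicted loss for N parameters
# trained on D tokens. Constants approximate the Hoffmann et al.
# (2022) fit; illustrative only.

def predicted_loss(n_params: float, n_tokens: float) -> float:
    E = 1.69                    # irreducible loss of the fit
    A, alpha = 406.4, 0.34      # parameter-count term
    B, beta = 410.7, 0.28       # data-size term
    return E + A / n_params**alpha + B / n_tokens**beta

# More parameters or more data each lower the predicted loss,
# with diminishing returns along both axes.
small = predicted_loss(8e9, 1e12)      # 8B params, 1T tokens
large = predicted_loss(70e9, 1.4e12)   # 70B params, 1.4T tokens
```

The additive form makes the multi-variable point concrete: loss depends jointly on model size and data, so extrapolating a single curve of "scale" alone misses part of the equation.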
But what's really happening right now, in a lot of ways, is that we're disrupting our own selves, and many of the technologists can't seem to agree with each other. You have Professor Geoffrey Hinton on one side, who many refer to as the godfather of AI, and you'll take someone like Yann LeCun. There are a lot of things those two agree on, and a lot of things they do not agree on. Who's right? They're both brilliant minds in this world, along with all the other brilliant technologists out here, the Yoshuas of the world. But there's a competing contrast and a dichotomy of difference in what one says or the other, including, you know, Fei-Fei Li, who's one of the most brilliant minds in the world when it comes to AI. A lot of them are competing to be heard themselves. But at the end of the day, between the Kurzweils and the others, who's right and who's wrong, what I do know is that we're in a global AI arms race, and whoever has the most powerful compute capacity, the most powerful GPUs, and the most powerful AI will, in a lot of ways, be able to dictate the forward working progress of where we go, and how we even deal in naming conventions and actualities when it comes to AI and the workforce of the world. And so, when we see incredible platforms, I'm glad we have leaders such as Jensen Huang, who I believe comes from a really cool place when it comes to innovation with Nvidia, and Satya Nadella. But even they have their flaws, and I do work within parts of their ecosystems; I think they would admit that themselves, on a level of vulnerability. We as founders, we as leaders and entrepreneurs, and I'm not putting myself in their category, if you will, they're dealing with things and walking paths most people haven't. But at the end of the day, we still must consider humanity first.
We must consider accessibility, education, community development, equality, safety, privacy, wellness, and ethics as the utmost concerns at the innovating level. Those are some of the things I'm looking at when it comes to the path to superintelligence. If it's not built with the consideration of human betterment, social responsibility, and sustainable outcomes, then we are all headed in the wrong direction when it comes to technological innovation.
Wade Erickson (31:22):
Wow, thank you for that. I didn't want to interrupt; you were quite on a roll there. But yeah, I think that's a great place to wrap up: that in the end, those that control the use of AI could, I think, be more powerful than even those that control energy grids and the other things that have been differentiators in countries' ability to climb into the upper levels of equality, country to country. A great place to wrap that up, along with the history of AI, where it's come from, and its patterns of disruption. So I definitely appreciate what you've shared; it was largely your time, and I'm glad to be able to let you share all that on the show. Real quick before we wrap up, if you have a second here, I wanted to introduce next week's guest. Next week we have Philip Dye, QA Lead at eMarketer. The topic is going to be A Culture of Quality: Building Community Through Collaboration and Shared Visions of Quality. It's going to be on Wednesday the 19th at 9:30. Look forward to having you all here next week at the same time. And Christopher, thanks so much: jam-packed with great information and points of reference, and the level of knowledge you have in this area is just amazing. So thank you, thank you, thank you so much for sharing it with the community here. And people, go grab his other information. He has lots of podcasts and lots of speaking engagements, so if you want to hear more from Christopher, we share the taglines in the presentation here, and you can watch the recording, which is available immediately afterward, as well as hit up some websites. So, do you have any books or anything coming up? All that speaking and such often leads to books. What are your thoughts on that?
Christopher Lafayette (33:46):
Yeah, a lot. I wrote a book, and a lot of people want me to release it. I've shared some of it with quite a few people in my ecosystems. I've written a book called Surviving in Silicon Valley. I'm hesitant, candidly, about whether I want to publish it or not. There's a lot packed in there; I've been writing it since maybe 2015, so there's a lot in there that leads up to this, and it's about 50,000 words long. But honestly, with what we see with how many people are producing books by way of AI, with AI being the majority writer, I don't know how comfortable I feel. And I don't know, honestly, if anybody would want to read a book that I put out there. So there's that as well.
Wade Erickson (34:33):
You'd be surprised.
Christopher Lafayette (34:35):
I don't know.
Wade Erickson (34:36):
Alright, well, thank you so much for your time. As we wrap up, for everybody else, hopefully we'll see you next week.
Christopher Lafayette (34:43):
Thank you, Wade. Thank you Carlos. Thank you Logigear.
Emergent Technologist
Christopher Lafayette, an influential emergent technologist, speaker, and humanitarian from Silicon Valley, is making remarkable strides in the metaverse, AI, medtech, and Web 2.5+ sectors. He established GatherVerse to merge technology and human experience, ensuring every digital innovation reflects and caters to the aspirations of its users. He also founded HoloPractice, merging healthcare and technology, and co-founded Aug Lab, which intersects the life sciences with AI and immersive technologies. With speaking and advisory roles at organizations like Google, Meta, the European Commission, the US Government, Microsoft, and multiple universities, Lafayette consistently advocates for the integration of humanity into technological progress.