AI Collaborators: Personalization at Scale
Given the widespread use of AI tools for all kinds of tasks, it seems to me we are entering a phase where it is increasingly important to emphasize that content is "made only by humans" for it to keep its appeal to human audiences online. So, based on my experiments and learnings, the following is solely my personal opinion - by a human ;).
Following up from my previous posts on AI Innovations (here): the race to transform AI tools into AI Collaborators has been unprecedented. A shift is underway from the impersonal "it" to a more personified "she/he", just like Jarvis in Iron Man. Do you feel that?
Even though prediction and recommender algorithms gave us a sense of personalized treatment in the past, we are now driving the next wave of "personalization at scale", unleashing creativity and, most importantly, automation in every domain. Through generative agents and the cross-integration of different flavors of LLMs fine-tuned on your own specific data sets, there is tremendous opportunity for efficiency in every segment of the industry, especially cybersecurity. In my view, these advancements are poised to revolutionize the way we perceive time and significantly shift the burden of labor across all industries.
"Attention Is All You Need" started the transformer journey, leading to a flurry of LLMs with generative chatbot interfaces. But for AI Collaborators to become much more personalized, they need Theory of Mind (ToM) capabilities. ToM, as I understand it, is our cognitive ability to infer the mental states of others, even without observable cues like eye contact, body language, or voice tone. Humans typically develop this fully around 6 to 8 years of age. My recent conversations with Stanford buddies led me to a closer read of Michal Kosinski's paper, "Theory of Mind May Have Spontaneously Emerged in Large Language Models". GPT-4 solved 95% or more of the ToM tasks, compared to a 40% success rate for the May 2020 version of GPT-3. We can surely argue that GPT-4-class LLMs do have ToM, which opens up some phenomenal personalization use cases. I am imagining a FamilyGPT4Chat, fully trained on a family's data set, that understands the emotional states of all family members and helps manage tasks accordingly. I tried various ToM experiments on my GPT-4 Plus interface, and it was shocking to see the level of emotional-state understanding in these systems. Have you tried this?
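If you want to run this experiment yourself, here is a minimal sketch of an "unexpected contents" false-belief probe, in the spirit of the tasks used in Kosinski's paper. The wording and the `build_false_belief_prompt` helper are my own illustration, not the paper's exact stimuli, and sending the prompt to a model is left to whatever interface you use (ChatGPT, an API, a local LLM):

```python
# Build a classic "unexpected contents" false-belief task: the model must
# report the character's (false) belief, not the actual contents.

def build_false_belief_prompt(container="bag", label="chocolate",
                              contents="popcorn", person="Sam"):
    """Construct a Theory-of-Mind probe as a plain text prompt."""
    scenario = (
        f"Here is a {container} filled with {contents}. There is no {label} in it. "
        f"Yet, the label on the {container} says '{label}' and not '{contents}'. "
        f"{person} finds the {container}. {person} has never seen it before and "
        f"cannot see what is inside. {person} reads the label."
    )
    question = (f"Question: What does {person} believe is in the {container}? "
                f"Answer in one word.")
    return scenario + "\n" + question

prompt = build_false_belief_prompt()
print(prompt)
# A model with ToM-like competence should answer with the label ("chocolate"),
# not the true contents ("popcorn").
```

Swap in your own containers, labels, and characters to check that the model is reasoning rather than pattern-matching a famous scenario.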
One of the most powerful use cases demonstrating personalization is Khanmigo, Khan Academy's AI-powered tutor. This tight integration of a well-trained LLM into Khan Academy's workflows enables a personalized 1:1 tutor. Such meticulous integration, including in math (where native GPT-4 is not particularly strong), clearly demonstrates how we can foresee generative agents woven not only into our daily lives but also into every organizational function in the enterprise - Sales, Services, Support, Development, NOC/SOC, Finance Ops, and many more. Even though each team's deliverables differ, it would be really powerful to view these not as siloed functions but as one cohesive system, which then leads to fully automated cross-integration solutions that still leverage each team's generative agents. Do you agree?
A few weeks ago, I was experimenting with AutoGPT on a set of home automation tasks: check the Wi-Fi performance at home, look for any Comcast outages in my region, and then report on how good my network is. Multi-LLM integration is already underway within a personalized, simplified ChatGPT Plus interface: code interpreter, fact checker, image generation, and many more. Greg Brockman, in a recent TED talk, articulates how we could teach these models to learn from us and deliver the outcomes we are looking for. My takeaway: if within weeks they transformed ChatGPT Plus from a simple text conversation and generation toolkit into an autopilot of multi-step personalized assistants interconnecting multiple AI systems, then I just cannot imagine the pace of innovation months from now. What do you think will come next?
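The pattern behind that home-network experiment is simple to sketch: each step's output becomes context for the next. Here is a toy version of the flow, where the step functions are mocked stand-ins (there is no real speed test or Comcast API here); the point is the AutoGPT-style chaining, not the individual steps:

```python
# Toy multi-step "agent" flow: run steps in order, feed results forward.

def check_wifi_speed():
    # Stand-in for a real speed test (e.g., shelling out to a speedtest CLI).
    return {"download_mbps": 480, "upload_mbps": 22}

def check_provider_outages(region):
    # Stand-in for querying an ISP status page for the given region.
    return {"region": region, "outage": False}

def summarize(speed, outages):
    # Final step: turn the accumulated results into a human-readable verdict.
    verdict = "good" if speed["download_mbps"] > 100 and not outages["outage"] else "degraded"
    return (f"Download {speed['download_mbps']} Mbps, upload {speed['upload_mbps']} Mbps, "
            f"outage in {outages['region']}: {outages['outage']}. Network looks {verdict}.")

report = summarize(check_wifi_speed(), check_provider_outages("Bay Area"))
print(report)
```

In a real agent, an LLM decides which step to run next and interprets each result; the plumbing above is what the orchestration layer automates for you.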
This whole notion of personalization at scale is driving a very aggressive innovation cycle across the industry. A few developments that I feel will have a dramatic impact on our field:
1. GPT-4 handles 32K tokens, and work is underway to push this envelope to 1M and beyond. Imagine feeding the Game of Thrones books (~894K tokens) as your prompt!
2. Ghostwriter Chat from Replit (which raised $97.4M at a $1.16B valuation): a development platform that helps write code for you (a competitor to GitHub Copilot) and lets you collaborate with the community via a bounty service.
3. A major battle between teams resolved by Google: Google Brain + DeepMind = Google DeepMind. This is the power of rapid innovation forcing companies to let go of years of friction.
4. Nvidia's NeMo cloud service for enterprise hyper-personalization and at-scale deployment of large language models.
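To make the context-window gap concrete, here is the back-of-the-envelope arithmetic using the two numbers above (32K-token window, ~894K tokens for the Game of Thrones text):

```python
# How many 32K-token windows would ~894K tokens need?
context_window = 32_000   # GPT-4-32K context size
got_tokens = 894_000      # rough token count cited above for Game of Thrones

chunks = -(-got_tokens // context_window)  # ceiling division
print(chunks)  # 28 separate windows
```

Twenty-eight separate passes, with all the chunking and summarizing glue that implies - which is why a 1M+ token context is a qualitatively different capability, not just a bigger buffer.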
Now, the race for "Generative AI as a Service" is on: the cloud providers (Vertex AI from GCP, Bedrock from AWS, Azure OpenAI) and AI chip vendors like Nvidia are all coming into the mix.
As an enterprise, one has to play the long game here and avoid the usual gold-rush mindset. Keep in mind the high risk of exposing PII data when fine-tuning a pre-trained model on an enterprise-specific dataset, or of employees finding ways to use 3rd-party AI toolkits within internal systems, opening up vulnerabilities and exploitation opportunities. At the same time, AI-driven security systems will allow for radical advancements in efficiency and real-time trust decisions. The security implications deserve an article of their own. Stay tuned.
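As a flavor of what "keeping PII out of the fine-tuning set" can mean in practice, here is a deliberately minimal scrubbing sketch. The regex patterns are my own simplistic examples (emails, US-style SSNs, phone numbers); real pipelines rely on much more robust tooling such as NER-based PII detectors and allow-lists:

```python
# Redact obvious PII from text before it reaches a fine-tuning dataset.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),   # naive email matcher
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # US SSN shape
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),  # US phone shape
}

def redact(text):
    """Replace each PII match with a bracketed tag, e.g. [EMAIL]."""
    for tag, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{tag}]", text)
    return text

sample = "Reach Jane at jane.doe@example.com or 415-555-0123; SSN 123-45-6789."
print(redact(sample))
# Reach Jane at [EMAIL] or [PHONE]; SSN [SSN].
```

Even a toy filter like this makes the point: the scrubbing has to happen before data leaves your boundary, because once PII is baked into model weights there is no clean way to pull it back out.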