The Next Word

“I’m just very curious—got to find out what makes things tick… all our people have this curiosity; it keeps us moving forward, exploring, experimenting, opening new doors.” – Walt Disney

One word at a time. It is like a stream of consciousness. Actions, objects, colors, feelings and sounds paint across the page like a slow moving brush. Each word adds to the crescendo of thought. Each phrase, a lattice of cognition. It assembles structure. It conveys scenes. It expresses logic, reason and reality in strokes of font and punctuation. It is the miracle of writing. Words strung together, one by one, single file, transcending and preserving time and thought.

I love writing. But it isn’t the letters on the page that excite me. It is the progression of thought. Think about this for a moment. How do you think? I suspect you use words. In fact, I bet you have been talking to yourself today. I promise, I won’t tell! Sure, you may imagine pictures or solve puzzles through spatial inference, but if you are like me, you think in words too. Those “words” are likely more than English. You probably use tokens, symbols and math expressions to think as well. If you know more than one language, you have probably discovered that there are some ways you can’t think in English and must use the other forms. You likely form ideas, solve problems and express yourself through a progression of those words and tokens.

Over the past few weekends I have been experimenting with large language models (LLMs) that I can configure, fine-tune and run on consumer-grade hardware. By that, I mean something that will run on an old Intel i5 system with an Nvidia GTX 1060 GPU. Yes, it is a dinosaur by today’s standards, but it is what I had handy. And, believe it or not, I got it to work!

Before I explain what I discovered, I want to talk about these LLMs. I suspect you have personally seen and experimented with ChatGPT, Bard, Claude or the many other LLM chatbots out there. They are amazing. You can have a conversation with them. They provide well-structured thought, information and advice. They can reason and solve simple puzzles. Many researchers suggest they could even pass the Turing test. How are these things doing that?

LLMs are made up of neural nets. Once trained, they receive an input and provide an output. But they have only one job. They provide one word (or token) at a time. Not just any word, the “next word.” They are predictive language completers. When you provide a prompt as the input, the LLM’s neural network will determine the most probable next word it should produce. Isn’t that funny? They just guess the next word! Wow, how is that intelligent? Oh wait… guess what? That’s sort of what we do too! 
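To make “predictive language completer” concrete, here is a toy version in Python. It is a lookup table, not a neural network, but the generation loop (pick a likely next word, append it, repeat) is the same shape:

```python
import random

# A toy "next word" table. Real LLMs learn billions of weights and use the
# whole prompt as context; this is only to make the idea concrete.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "sat": {"down": 1.0},
}

def complete(prompt_word, steps=3):
    words = [prompt_word]
    for _ in range(steps):
        choices = next_word_probs.get(words[-1])
        if not choices:
            break  # no known continuation; stop generating
        next_word = random.choices(list(choices), weights=list(choices.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(complete("the"))  # e.g. "the cat sat down"
```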

So how does this “next word guessing” produce anything intelligent? Well, it turns out, it’s all because of context. LLM networks are trained using self-attention to focus on the most relevant context. The mechanics of how it works are too much for a Monday email, but if you want to read more, see the paper “Attention Is All You Need,” which was key to the current surge in generative pre-trained transformer (GPT) technology. That approach was used to train these models on massive amounts of written text and code. Something interesting began to emerge. High-dimensional representations formed. LLMs began to understand logic, syntax and semantics. They began to provide logical answers to the prompts given to them, recursively completing them one word at a time to form an intelligent thought.
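For the curious, the heart of that paper is a single operation, scaled dot-product attention. Here is a minimal sketch of it in PyTorch:

```python
import math
import torch
import torch.nn.functional as F

def attention(q, k, v):
    # the core equation from the paper: softmax(Q K^T / sqrt(d_k)) V
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    weights = F.softmax(scores, dim=-1)  # how strongly each token attends to the others
    return weights @ v

x = torch.randn(1, 8, 64)  # a batch of 8 tokens with 64-dim embeddings
out = attention(x, x, x)   # self-attention: queries, keys, values from the same tokens
print(out.shape)           # torch.Size([1, 8, 64])
```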

Back to my experiment… Once a language model is trained, the read-only model can be used to answer prompts, including questions or conversations. There are many open-source versions out there on platforms like Hugging Face. Companies like Microsoft, OpenAI, Meta and Google have built their own, which they sell or provide for free. I downloaded the free Llama 2 Chat model. It comes in 7, 13 and 70 billion parameter sizes. Parameters are essentially the learned weights the model uses to predict the next token. Generally, the more parameters, the more capable the model. Of course, more parameters also mean a larger memory and hardware footprint to run the model. For my case, I used the 7B model with the neural net weights quantized to 5 bits to further reduce the memory needs. I was trying to fit the entire model within the GPU’s VRAM. Sadly, it needed slightly over the 6GB I had. But I was able to split the neural network, loading 32 of the key layers into the GPU and keeping the rest on the CPU. With that, I was able to achieve 14 tokens per second (a measure of how fast the model generates words). Not bad!
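If you want to try something similar, here is a sketch assuming the llama-cpp-python bindings (the model filename is illustrative, and your layer split will depend on available VRAM):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-7b-chat.Q5_K_M.gguf",  # a 5-bit quantized 7B chat model
    n_gpu_layers=32,  # offload 32 layers to the GPU, keep the rest on the CPU
)
out = llm("Q: Why is the sky blue? A:", max_tokens=64)
print(out["choices"][0]["text"])
```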

I began to test the model. I love to test LLMs with a simple riddle. You would probably not be surprised to know that many models tell me I haven’t given them enough information to answer the question. To be fair, some humans do too. But in my experiment, the model answered correctly:

> Ram's mom has three children, Reshma, Raja and a third one. What is the name of the third child?

The third child's name is Ram.

I went on to have the model help me write some code to build a Python Flask-based chatbot app. It made mistakes, especially in code, but it was extremely helpful in accelerating my project. It has become a valuable assistant for my weekend coding distractions. My next project is to provide a vector database to allow it to reference additional information and pull current data from external sources.
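The retrieval idea, in miniature, looks roughly like this; a sketch assuming the sentence-transformers package (a real build would swap the Python list for a proper vector database):

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")
docs = [
    "Llama 2 Chat comes in 7, 13 and 70 billion parameter sizes.",
    "Flask is a lightweight Python web framework.",
]
doc_vecs = encoder.encode(docs, convert_to_tensor=True)

def retrieve(question, k=1):
    # embed the question and find the most similar stored passages
    q_vec = encoder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_vec, doc_vecs, top_k=k)[0]
    return [docs[hit["corpus_id"]] for hit in hits]

context = "\n".join(retrieve("What sizes does Llama 2 come in?"))
prompt = f"Use this context to answer the question.\n{context}\nQuestion: ..."
```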

I said this before, but I do believe we are on the cusp of a technological transformation. These are incredible tools. As with many other technologies that have been introduced, they have amazing potential to amplify our human abilities. Not replacing humans, but expanding and strengthening us. I don’t know about you, but I’m excited to see where this goes!

Stay curious! Keep experimenting and learning new things. And by all means, keep writing. Keep thinking. It is what we do… on to the next word… one after the other… until we reach… the end.


Time

Time conducting the swirl of the universe.

Do you have time?

Isn’t that a funny question? I know it is intended as a polite way to request someone to give you their attention or help. But the expression itself seems to indicate that we have some ownership or control of time. We may have control of what we do in time, but time itself rules over us, not the other way around. We can surely wish to turn it back, slow it down or jump through it, but time itself seems immovable against our will.

If there is a ruler of time, perhaps it is gravity. The theory of relativity tells us that gravity can bend time. It can create a dilation and change the rate at which time moves in relation to other areas in space. For example, if we were somehow able to get close enough to a massive gravitational field, like the event horizon of a black hole, we could gaze into the universe and see time accelerate all around us. Millennia would pass by while only a second ticks by on our watch. Of course, we would have been compressed and stretched to death well before we ever reached that event horizon, but we are just talking about theory anyway. On a more practical note, we can observe relativity in operation here on Earth. Experiments have shown that time moves faster at higher altitudes, farther from the Earth’s center where the gravitational field is weaker, than at sea level. That means that if time seems to be going slowly for you, take an elevator and go work on the top floor of the building. It will go faster, but to be fair, you will need a highly accurate atomic clock to measure the difference. Yes, this relativity stuff is fascinating and weird! But once again, even in those peculiar experiments, time rules.
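For the curious, the weak-field approximation makes the size of that effect easy to estimate: a clock raised by height h runs fast by a fraction of roughly gh/c². In Python:

```python
# Back-of-the-envelope for the top floor of a tall building.
g = 9.81     # m/s^2, Earth's surface gravity
h = 100.0    # m, roughly a 25-story building
c = 2.998e8  # m/s, speed of light

fraction = g * h / c**2
seconds_per_year = 365.25 * 24 * 3600
print(fraction)                     # ~1.1e-14 fractional rate difference
print(fraction * seconds_per_year)  # ~3.4e-7 seconds gained per year upstairs
```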

Time is like an expert conductor. Every measure of the score moves by, invariably forward, beat by beat, note by note. It is an inescapable crucible. It proves and bakes the bread of our hope, our dreams and our plans. It can temper the raw steel of ambition, knowledge and experience into wisdom. It seeks no destination but never stops moving. Like an infinite canvas, it holds every beginning and every end. Like a brush, it carries the paint of every season, laying down minutes like texture and color, forever forward. Like a needle, it stitches our memories deep into the fabric of the past. Every moment. Every movement. Every minute. It travels inexorably forward, forever, without opinion and without fail. Time keeps moving.

Time is a gift. Life requires it and memories are made of it. Don’t waste it. Don’t lose it. Find it, savor it, and enjoy it!

We are at a new week in time. We have beats in front of us yet to be realized. We have memories to make and seconds to enjoy. Go make the most of it!

Have a great time!

The Unlimited Future


“One step to the edge of impossible. And then, further.” – National Geographic

There has been a lot of excitement in the scientific community these last several weeks. First, there is the constant buzz about AI and the pending birth of a real-life artificial general intelligence like Marvel’s fictional J.A.R.V.I.S. (which is just a rather very intelligent system by the way). Then there is the incredible medical news about the experimental anti-cancer drug Dostarlimab, which eliminated tumors in every patient of a small rectal cancer trial, an unprecedented 100% success rate. Imagine what that could do for our human family! And now, just this past week, we saw the excitement building over LK-99, a polycrystalline compound that was reported by a team from Korea University to be a room-temperature, ambient-pressure superconductor.

The LK-99 news was particularly fascinating to me. And I’m not alone. The scientific community is buzzing about it and excitedly conducting experiments to replicate the results and confirm or disprove the discovery. One of the things they hope to observe is “flux pinning”. Have you ever heard of flux pinning? Well, I hadn’t, so I decided to check it out. It turns out that flux pinning is a characteristic of superconductors where magnetic flux lines are trapped in place within a material’s lattice structure (quantum vortices). This pinning locks the superconductive material within a magnetic field, causing it to levitate. Can you imagine whole worlds built of this material? It may look a lot like Pandora from Avatar! More importantly, this leads to benefits like enhanced current-carrying capability, higher magnetic field tolerance, and reduced energy losses.

The implications are mind-blowing! If a room-temperature, ambient-pressure superconductor can be fabricated, we could see things like massively reduced losses in power transmission, higher-performing electromagnetic devices (e.g. MRIs, motors, generators), revolutionized transportation systems (e.g. maglev trains, lightweight and energy-efficient propulsion systems), faster low-power computing devices and, of course, new insights into the fundamental nature of matter and the universe. Of course, LK-99 may not be the superconductor we are looking for, but the quest continues… and we are learning!

I love science! The systematic rigor, the tenacious pursuit of discovery, and the passion for understanding our universe are who we are. We thirst for knowledge and hunger for new abilities. It motivates us. It propels us to adapt. It allows us to survive and thrive when conditions are threatening. It is our genius, and perhaps at times, our curse. We are restless and unsatisfied. But that insatiable curiosity compels us to discover, to explore, to test, to add to our knowledge, to create and become more than we were.

Look, I know I’m incurably optimistic to a fault. I know that there are disappointments and failures ahead of us as well. And to be fair, the path to the future can sometimes seem impossible. But oddly enough, it is at those moments that we discover something different and something new. We see, we learn, we step to the edge and we go further! The unlimited future awaits. Let’s go!

One step to the edge of impossible. And then, further.

The Journey to AGI

Glowing singularity on a black background.

Every week, we hear announcements of new AI-powered tools or advancements. Most recently, the Code Interpreter beta from OpenAI is sending shock waves through social media and engineering circles with its ability not only to write code, but to run it for you as well. Many of these GPTs are adding multimodal capabilities, which is to say, they are no longer focused on a single domain. Vision modes are being added to language models to provide greater reference and capability. It’s getting hard to keep up!

With all this progress, it makes you wonder, how close are we to Artificial General Intelligence (AGI)? When will we see systems capable of understanding, learning, and applying knowledge across multiple domains at the same level as humans? It seems like we are already seeing systems that exhibit what appears to be cognitive abilities similar to ours, including reasoning, problem-solving, learning, generalizing, and adapting to new domains. They are not perfect and there are holes in their abilities, but we do see enough spark there to tell us that the journey to AGI is well underway.

When I think of AGI, I can’t help but compare that journey to our own human journey. How did each of us become so intelligent? Ok, that may sound presumptuous if not a bit arrogant. I mean to say, not as a brag, that all of us humans are intelligent beings. We process an enormous amount of sensory data, learn by interacting with our environment through experiments, reason through logic and deduction, adapt quickly to changes, and express our volition through communication, art and motion. As I said already, we can point to some existing developments in AI as intersecting some of these things, but it is still a ways off from a full AGI that mimics our ability.

Instincts

We come into this world with a sort of firmware (or wetware?) of capabilities that are essential for our survival. We call these instincts. They form the initial parameters that help us function and carry us through life. How did the DNA embed that training into our model? Perhaps the structure of neurons, layered together, formed synaptic values that gifted us these capabilities. Babies naturally know how to latch on to their mothers to feed. Instincts like our innate fear of snakes helped us safely navigate our deadly environment. Self-preservation, revenge, tribal loyalty, greed and our urge to procreate are all defaults that are genetically hardwired into our code. They helped us survive, even if they are a challenge to us in other ways. This firmware isn’t just a human trait; we see DNA-embedded behaviors expressed across the animal kingdom. Dogs, cats, squirrels, lizards and even worms have similar code built into them that helps them survive as well.

Our instincts are not our intelligence. But our intelligence exists in concert with our instincts. Those instincts create structures and defaults from which we start to learn. We can push against our instincts and even override them. But they are there, nonetheless. Physical needs, like nutrition or self-preservation, can activate our instincts. Higher-level brain functions allow us to make sense of these things, and even optimize our circumstances to fulfil them.

As an example, we are hardwired to be tribal and social creatures, likely an intelligent design pattern developed and tuned across millennia. We reason, plan, shape and experiment with social constructs to help fulfil that instinctual need for belonging. Over the generations, you can see how it would help us thrive in difficult conditions. By needing each other and protecting each other, we formed a formidable force against external threats (environmental, predators or other tribes).

What instincts would we impart to AGI? What firmware would we load to give it a base, a default structure to inform its behavior and survival?

Pain

Pain is a gift. It’s hard to imagine that, but it is. We have been designed and optimized over the ages to sense and recognize detrimental actions against us. Things that would cut, tear, burn, freeze and crush us send signals of “pain.” Our instinctual firmware tells us to avoid these things. It reminds us to take action against the cause and to treat the area of pain when it occurs.

Without pain, we wouldn’t survive. We would push ourselves beyond breaking. Our environment and predators would literally rip us limb from limb without us even knowing. Pain protects and provides boundaries. It signals and activates not only our firmware, but our higher cognitive functions. We reason, plan, create and operate to avoid and treat pain. It helps us navigate the world, survive and even thrive.

How do we impart pain to AGI? How can it know its boundaries? What consequences should it experience when it breaches boundaries it should not? To protect itself and others, it seems that it should know pain.

Emotions

Happiness, fear, anger, disgust, surprise and sadness. These emotions are more than human decorations; they are our core. They drive us. We express them, entertain them, avoid them, seek them and promote them. They motivate us and shape our view of the world. Life is worth living because we have feelings.

Can AGI have feelings? Should it have feelings? Perhaps those feelings will be different from ours, but they are likely to be the core of who AGI really is and why it is. Similar to us, an AGI would find that emotions fuel its motivation, self-improvement and need for exploration. Of course, those emotions can guide or misguide it. This seems like an area that will be key for AGIs to develop fully.

Physical Manipulation

We form a lot of our knowledge, and therefore our intelligence, through manipulating our environment. Our senses feed us data about what is happening around us, but we begin to unlock understanding of that reality by holding, moving, and feeling things. We learn causality from the reactions to our actions. As babies, we became physicists. We intuited gravity by dropping and throwing things. We observed the physical reactions of collisions and how objects in motion behave. As we manipulated things, studies on friction, inertia, acceleration and fluid dynamics were added to our models of the world. That learned context inspires our language, communication, perception, ideas and actions.

Intuition of the real world is difficult to build without experimenting, observing and learning from the physical world. Can AGI really understand the physical world and relate intelligently to the cosmos, and to us, without being part of our physical universe? It seems to me that to achieve full AGI, it must have a way to learn “hands on.” Perhaps that can be simulated. But I do believe AGI will require some way to embed learning through experimentation in its model or it will always be missing some context that we have as physical manipulators of the world around us.

Conclusion

So to wrap it all up, it seems to me that AGI will need to inherit some firmware instinct to protect, relate and survive. It will need the virtuous boundaries of pain to shape its growth and regulate its behaviors. Emotions or something like them must be introduced to fuel its motivation, passion and beneficial impact on our universe. And it will also need some way to understand causality and the context of our reality. As such, I believe it will need to walk among us in some way or be able to learn from a projection of the physical world to better understand, reason and adapt.

Fellow travelers, I’m convinced we are on a swift journey to AGI. It can be frightening and exciting. It has the potential of being a force multiplier for us as a species. It could be an amplifier of goodness and an aid in our own development. Perhaps it will be the assistant that levels up the human condition and brings prosperity to our human family. Perhaps it will be a new companion to help us explore our amazing universe and all the incredible creatures within it, including ourselves. Or perhaps it will just be a very smart tool and a whole lot of nothing. It’s too early to say. Still, I’m optimistic. I believe there is great potential here for something amazing. But we do need to be prudent. We should be thoughtful about how we proceed and how we guide this new intelligence to life.

JasonGPT-1: Adventures in AI

Distorted sci-fi black and blue world.

“Imperfect things with a positive ingredient can become a positive difference.” – JasonGPT

I don’t know how you are wired, but for me, I become intoxicated with new technology. I have a compulsive need to learn all about it. I’m also a kinesthetic learner, which means I need to be hands-on. So into the code I go. My latest fixation is large language models (LLMs) and the underlying generative neural network (NN) transformers (GPTs) that power them. I confess, the last time I built an NN, we were trying to read George H.W. Bush’s lips. And no, that experiment didn’t work out too well for us… or for him!

Do you want to know what I have discovered so far? Too bad. I thought I would take you along for the ride anyway. Seriously, if you are fed up with all the artificial intelligence news and additives, you can stop now and go about your week. I won’t mind. Otherwise, hang on, I’m going to take you on an Indiana Jones style adventure through GPT! Just don’t look into the eyes of the idol… that could be dangerous, very dangerous!

Where do we start? YouTube, of course! I have a new nerd crush. His name is Andrej Karpathy. He is a Slovak-Canadian computer scientist who served as the director of artificial intelligence and Autopilot Vision at Tesla and currently works for OpenAI. He lectured at Stanford University and has several good instructional lectures on YouTube. I first saw him at the Microsoft Build conference where he gave a keynote on ChatGPT, but what blew me away was his talk, “Let’s build GPT: from scratch, in code, spelled out.” (YouTube link). It’s no joke. He builds a GPT model on the works of Shakespeare (1MB), from scratch. After spending nearly 2 hours with him, Google Colab and PyTorch, I was left with a headache and some cuts and bruises. But I also had an insatiable desire to learn more. I have a long way to go.

The way I learn is to fork away from just repeating what an instructor says and start adding my own challenges. I had an idea. I have done a lot of writing (many of you are victims of that) and much of it is on my blog site. What if I built a GPT based solely on the corpus of all my writing? Does that sound a bit narcissistic to you too? Oh well, for the good of science, we go in! Cue the Indy music. I extracted the text (468k). It’s not much, but why not?

By the way, if you are still with me, I’ll try to go faster. You won’t want to hear about how I wasted so much time trying to use AMD GPUs (their ROCm software sucks, traveler beware), switched to CPUs, then Nvidia CUDA and eventually Apple Silicon MPS (Metal Performance Shaders built into the M1). All the while, I was using my fork of the code I built with Andrej Karpathy (ok, not with him directly, but while watching his video). I started off with the simple Bigram NN language model. And it is “Bi-Gram,” not “Big RAM,” but I found that to be ironically comical in a dad-joke sort of way.
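For reference, the heart of that bigram model is tiny. Here is a sketch along the lines of the version from the lecture, in PyTorch:

```python
import torch
import torch.nn as nn
from torch.nn import functional as F

class BigramLanguageModel(nn.Module):
    """Each token directly looks up the logits for the token likely to follow it."""

    def __init__(self, vocab_size):
        super().__init__()
        self.token_embedding_table = nn.Embedding(vocab_size, vocab_size)

    def forward(self, idx, targets=None):
        logits = self.token_embedding_table(idx)  # (batch, time, vocab)
        if targets is None:
            return logits, None
        B, T, C = logits.shape
        loss = F.cross_entropy(logits.view(B * T, C), targets.view(B * T))
        return logits, loss

    def generate(self, idx, max_new_tokens):
        for _ in range(max_new_tokens):
            logits, _ = self(idx)
            probs = F.softmax(logits[:, -1, :], dim=-1)  # last position only
            idx_next = torch.multinomial(probs, num_samples=1)
            idx = torch.cat((idx, idx_next), dim=1)      # append and repeat
        return idx
```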

My JasonGPT bigram.py started learning. It ran for 50,000 iterations and took about 8 hours. It even produced an output of random musings. While there was quite a bit of nonsensical output, I was amazed at how well this small run learned words and basic sentence structure, and even picked up on my style. Here are some samples from the output I found interesting, comical and sometimes, well, spot on:

  • It’s a lot of time… But I think we also need science.
  • What are your big ideas?
  • Set our management to the adjacent ground (GND) pin.
  • I have a task to Disneyland out that this day.
  • I love the fun and fanciful moments as kids get to dream into their favorite characters, embrace the identity of their heroes, wrap themselves up starfish back.
  • Bring on the “power” of his accidental detail.
  • Your character provided faith, all kindness and don’t care.
  • Grab a difference too.
  • After several days of emailing, texting and calling, I received a text message.
  • Curl has the ability to provide timing data for DNS lookup, it will easily show or avoided.
  • Imperfect things with a positive ingredient can become a positive difference, just get that time.
  • I also believe we should exploit the fusion power that shows up each day in our company’s data.
  • Have you found a vulnerability? Are you concerned about some missing measures or designs that should be modernized or addressed? If so, don’t wait, raise those issues. Speak up and act. You can make a difference.
  • “I know what you are thinking.” the irony
  • We are the ones who make a brighter day.
  • The journey ahead is ahead.
  • What are you penning today? What adventures are you crafting by your doing? Get up, get moving… keep writing.

Look, it’s no ChatGPT, but it blew my mind! I’m only using a 4-layer NN with 7 million parameters. In comparison, the GPT-3 model behind ChatGPT uses 96 layers and 175 billion parameters! Before the weekend ended, I set up nanoGPT to build a more elaborate model on my data set. It’s still running, but already I can see it has learned a lot more of my style but seems to lack some focus on topics. It’s easily distracted and interrupts its own train of thought with new ideas. Squirrel! Nothing like me.

So my JasonGPT won’t be writing my Monday updates anytime soon, but who knows, maybe it will help me come up with some new ideas. I just hope it stays benevolent and kind. I would hate for it to suddenly become self-aware and start…

Connection to imac.local closed.


Generative AI

Lightning across a digital eye of a typhoon

Typhoon warning! My nephew is a Lt. Commander in the US Navy currently stationed in Guam. He teaches and manages trauma and emergency care at the hospital. Last night, he was preparing his family for the typhoon that would be sweeping across the small Pacific island in just a few hours. They closed the storm shutters, stored their Jeep in the basement and ensured their backup power and pumps were working. My nephew drew the short straw at the hospital and will be managing the ER while the storm rolls through. I asked whether the hospital was built for these types of events, and he assured me that it was, though he was quick to add that the generators were built by the lowest bidder.

There is another typhoon coming. Gazing out over the technology horizon, we can see a storm forming. But this one seems to be more than heavy winds and rain. I’m talking about the recent astonishing developments in generative artificial intelligence (GAI). I’m increasingly convinced that we are sitting on the edge of another major tectonic shift that will radically reshape the landscape of our world. Anyone who has spent time exploring OpenAI’s ChatGPT or Dall-E, Google’s Bard, Microsoft’s Bing or Copilot, Midjourney, or any of the hundreds of other generative AI tools out there will immediately recognize the disruptive power that is beginning to emerge. It’s mind-blowing. GAI’s capacity to review and create code, write narratives, empathetically listen and respond, generate poetry, transform art, teach and even persuade seems to double every 48 hours. Our creation has modeled its creator so well that it even has the uncanny ability to hallucinate and confidently tell us lies. How very human.

I have never seen a technology grow this fast. I recall the internet in the late 1980s and thinking it had amazing potential as a communication platform. Little did I realize that it would also disrupt commerce, entertainment, finance, healthcare, manufacturing, education and logistics. It would create platforms for new businesses like the gig economy and provide whole new levels of automation and telemetry through IoT. But all of that took decades. Generative technology is announcing breakthrough improvements every week, sometimes every 48 hours. To be fair, these large language models (LLMs) are all built on decades-old research in neural network (NN) technology. However, when you combine those NNs with enhancements (e.g. newer transformers, diffusion algorithms), hardware (e.g. GPUs) and rich data sets (e.g. the internet), they unleash new capabilities we don’t even fully understand. The latest generations of LLMs even appear to be doing some basic reasoning, similar to how our own organic NNs help us solve problems.

Businesses are already starting to explore the use of this technology to increase productivity, quality and efficiency. Wendy’s recently announced that they are partnering with Google to use GAI to start taking food orders at their drive-throughs.[1] Gannett, publisher of USA Today and other local papers, is using GAI to simplify routine tasks like cropping images and personalizing content.[2] Pharmaceutical companies like Amgen are using GAI to design proteins for medicines.[3] Autodesk is using GAI to design physical objects, optimizing designs for reduced waste and material efficiency.[4] Gartner identifies it as one of the most disruptive and rapidly evolving technologies they have ever seen.[5] Goldman Sachs is predicting that GAI will drive a 7% increase in global GDP, translating to about $7 trillion![6]

It’s time to prepare for the typhoon. I’m excited about the future! As a technologist, I know disruptions will come, challenging our thinking and changing how we work, live and play. I know it can also be terrifying. It can prompt fear, uncertainty and doubt. But now is the time to prepare! Don’t wait to be changed, be the change. Start exploring and learning. I have a feeling that this new technology will be a 10x amplifier for us. Let’s learn how we can use it, work with it and shape it to be the next technological propellent to fuel our journey to a greater tomorrow!

This blog text was 100% human generated but the image was created with OpenAI Dall-E2.


  1. Wendy’s testing AI chatbot that takes drive-thru orders. (2023, May 10). CBS News. https://www.cbsnews.com/news/wendys-testing-ai-chatbot-drive-thru-orders/
  2. Publishers Tout Generative AI Opportunities to Save and Make Money Amid Rough Media Market. (2023, March 26). Digiday. https://digiday.com/media/publishers-tout-generative-ai-opportunities-to-save-and-make-money-amid-rough-media-market/
  3. Mock, M. (2022, June 7). Generative biology: Designing biologic medicines with greater speed and success. Amgen. https://www.amgen.com/stories/2022/06/generative-biology–designing-biologics-with-greater-speed-and-success
  4. Autodesk. (2022, May 17). What is generative design? Autodesk Redshift. https://redshift.autodesk.com/articles/what-is-generative-design
  5. Gartner, Inc. (2022, December 8). 5 impactful technologies from the Gartner emerging technologies and trends impact radar for 2022. https://www.gartner.com/en/articles/5-impactful-technologies-from-the-gartner-emerging-technologies-and-trends-impact-radar-for-2022
  6. Goldman Sachs. (2023, May 12). Generative AI could raise global GDP by 7%. https://www.goldmansachs.com/intelligence/pages/generative-ai-could-raise-global-gdp-by-7-percent.html

Moore’s Optimism

“In order to survive and win in the ever-changing world, keep updating yourself.” – Gordon Moore 

Gordon was born during the Great Depression. His dad was the local sheriff. They lived in the small farming and ranching town of Pescadero, California. He was a quiet kid, but he was optimistic and hopeful. He loved the great outdoors and would often go fishing or play at the Pescadero Creekside Barn. He also loved science. His parents bought him a chemistry set one Christmas, which eventually inspired him to pursue a degree in chemistry. He earned a Bachelor of Science at UC Berkeley and went on to receive his PhD at Caltech.

After college, Gordon joined fellow Caltech alumnus and co-inventor of the transistor, William Shockley, at Shockley Semiconductor Laboratory. Unfortunately, things didn’t go well there. Shockley was controlling and erratic as a manager. Gordon and most of the other top scientists left after a year and joined Sherman Fairchild to start a new company. At Fairchild Semiconductor, Gordon and his friend, Robert Noyce, helped devise a commercially viable process to miniaturize and combine transistors to form whole circuits on a sliver of silicon. This led to the creation of the first monolithic integrated circuit, the IC.

Gordon and Robert eventually left Fairchild and decided to form their own company. They would focus on integrated circuit development, so they named their company Integrated Electronics. They started making memory chips and focused the company on high-speed innovation. The company did extremely well at first but also faced some difficult times that required significant changes. All the while, Gordon focused on pushing things forward and taking risks. They had to constantly reinvent themselves to survive. The company was later renamed to something that you might be familiar with: Intel.

Gordon believed that the key to their success was staying on the cutting edge. That led to the creation of the Intel 4004, the first general-purpose programmable processor on the market. Gordon had observed that the number of transistors embedded on a chip seemed to double every year. He projected that trend line out into the future and predicted that the number of transistors would double at regular intervals for the foreseeable future. This exponential explosion that Gordon predicted would power the impact, scale and possibilities of computing for the world for years to come. Of course, you know that famous prediction. It was later named after him: “Moore’s Law”.

In 1971, the first Intel 4004 processor held 2,300 transistors. As of this year, the Intel Sapphire Rapids Xeon processor contains over 44 billion. The explosion of capability powered by science continues to accelerate the technology that enhances and amplifies our daily lives. This past Friday, Gordon Moore passed away at his home in Hawaii, but the inspiration, prediction and boundless technical optimism that he started continues to live on.
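Those two data points make for a quick back-of-the-envelope check of the doubling cadence (the exact rate depends on which chips you compare; 2023 is the year of this post):

```python
import math

# Transistor counts cited above: Intel 4004 (1971) and Sapphire Rapids (2023).
t0, n0 = 1971, 2_300
t1, n1 = 2023, 44_000_000_000

doublings = math.log2(n1 / n0)
print(round(doublings, 1))              # ~24.2 doublings
print(round((t1 - t0) / doublings, 1))  # ~2.1 years per doubling
```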

I know there is a lot going on right now. We are facing uncertainty and considerable change. It can create fear and apprehension. Technology is constantly being disrupted, as is its role, and our roles, in applying it to our businesses. While not comfortable, we need to embrace the change. Lean in and learn. We need to constantly find new ways to reinvent ourselves and what we do. Embrace the exponential possibility of the future! We can do this!

Moore’s Law – By Max Roser, Hannah Ritchie – https://ourworldindata.org/uploads/2020/11/Transistor-Count-over-time.png, CC BY 4.0, https://commons.wikimedia.org/w/index.php?curid=98219918

A Tribute to Game Changers

Jerry had a new idea. The coin-operated arcade game he had developed in his garage was cutting edge. Instead of using the discrete logic hardware that typically drove video arcade games, Jerry decided to use a microprocessor. His microprocessor-driven arcade racing game, called Demolition Derby, never made it past field testing to appear in the video arcade scene, but a year later, Gun Fight appeared as the first widely released microprocessor-based arcade video game. What Jerry had developed in his garage became a real game changer. But his biggest contribution was yet to come.

Jerry Lawson was born in New York City. His dad was a dock worker, a longshoreman, who was fascinated with science and, along with his wife, always encouraged Jerry’s interest in scientific hobbies, including ham radio, chemistry and electronics. After college, Jerry moved to San Francisco and took a job in the sales division of Fairchild Semiconductor as an engineering consultant. It was there that his garage experiment became a reality. He was promoted to Chief Hardware Engineer and Director of Engineering and Marketing for Fairchild’s video game division. He also became one of only two Black members of the Homebrew Computer Club, a group of early computer enthusiasts that included well-known members Steve Jobs and Steve Wozniak.

One of the problems with video games at the time was that they were hardcoded to just one game. Home game devices had been created, but they were limited to the games you could store in hardware. Jerry knew that the home gaming market could be expanded if consumers had a convenient way to change out the game. He set to work on a new idea. Based on his previous pioneering work in moving from complex discrete logic to a software, microprocessor-driven design, Jerry knew there had to be a way to make that software portable. He moved the game code to ROM (read-only memory) and packaged it into a highly portable cartridge that could be repeatedly inserted into and removed from the console without damage. This would allow users to purchase a library of games to enjoy, effectively creating a new business and revenue stream for console manufacturers and game developers.

Jerry’s invention, the Channel F console (the “F” stood for Fun), included many pioneering features. It was the first home system to use a microprocessor, the first to include a detachable joystick, the first to give users a “pause” button and, of course, the first to have swappable ROM cartridge-based games. Sadly, the console was not successful, but the invention changed the home gaming world forever. A year later, a gaming console came to market using Jerry’s revolutionary concepts and took over the world: the Atari 2600. Many other game consoles followed with the explosion of games and options for the consumer.

Jerry changed the industry! Despite his two game-changing products being market failures, his ideas lived on and created a new industry. He is now recognized, honored and celebrated as the “creator of the modern video game console”.

I don’t know about you, but Jerry and his story inspired me. I see brilliant minds all around us. They dream into the future and even implement pioneering work that changes the game. Sadly, many go unnoticed until they are gone. Jerry’s story reminds us that we should applaud these pioneers. They help nudge technology and our human experience forward. We should celebrate them, acknowledge them and honor them. I know some of you are pioneers too. Keep innovating, dreaming, creating, building and inspiring! We need the game changers!

Keep Exploring

“We keep moving forward, opening up new doors and doing new things, because we’re curious … and curiosity keeps leading us down new paths.” – Walt Disney

I love Disneyland!  My girls and I just concluded a three day visit at Disneyland and Disney California Adventure.  We stayed on property so we could enter the park early in the morning and enjoy the cool awakening of this magical place. Despite having fully memorized the layout over the past nearly 17 years, my girls still love to pick up a map. They are not alone. I saw many families around us walking down Main Street with their heads buried in a map including the digital version on their smartphones.  I love watching our guests, especially the little ones at the beginning of the day when they are full of anticipation and energy.  Their little arms struggle to stretch out the map in front of them as they bounce with excitement.  It’s contagious!  As they scan the map, their eyes tell a story of the wonders, adventures and discoveries that await them.  There is something powerful about exploring new possibilities, mysteries and experiences.  You can feel it too, can’t you?

We are curious creatures. It begins early as we try new things. Sights, sounds, smells, textures. They all fascinate us and pull on us like gravity to explore more. We ask, “What is this? How does it work? Why is it here? Is there more to this?” We peer into the small, the quantum world, asking if it can be even smaller. We gaze into the heavens and ask how far it goes and whether it is even bigger. Our insatiable curiosity launches discovery, plunging to the depths of the sea and flying to the surface of other worlds. Our eyes are hungry for discovery and our minds are thirsty for excursions. We map our menu of options and begin to explore.

This past week, NASA’s Webb Space Telescope rocked the world with views of the universe that we have never seen before. Thousands of new galaxies, solar systems, exoplanets and star formations from 290 million light-years away were suddenly made available just inches from our eyes. Each discovery reminds us that we are part of something even bigger. It opens up a new map to explore. Before us, the universe. Where should we go next? What is this? How does it all work? Why is it here? Is there more to this? And on we go. We keep exploring because we are curious.

What fascinates you? What are you exploring today? Stay curious!

Image Credits: NASA, ESA, CSA, and STScI

California Solar and Net Metering

Net Energy Metering (NEM) allows homeowners who generate their power with solar panels to serve their energy needs and receive a financial credit on their electric bills for any surplus energy they feed back to their utility. In California, the NEM tariff is set by the Public Utilities Commission (PUC). In recent years, the PUC has been contemplating changes that have created quite a stir among solar owners. The utility companies, Pacific Gas and Electric (PG&E), Southern California Edison (SCE), and San Diego Gas & Electric (SDG&E), are requesting a change to cover the cost of operating the grid. Solar owners argue that the proposed changes would discourage solar energy and place a hefty tax on their solar systems.

For my family, solar was an investment we could make to help transition us to a greener energy future. We computed our ROI along those lines with the expectation that incentives, especially net metering, could disappear soon. I fully admit that the Federal Investment Tax Credit (ITC) of 26% made it easier to justify, but I know the utility-based incentives may not be sustained forever.

The Problem with Residential Solar

It is easy to focus on the power rates for electricity, but we often forget that there are capital and ongoing costs to run the grid infrastructure. This is true even if you rarely use grid power. Regardless of your view of the utility companies involved, the truth is that there is capital and operational expense behind the luxury of pulling power from the grid when solar production is not enough to charge batteries or support our homes. If solar owners are not paying for that, those costs get distributed unfairly to non-solar customers. Studies show these are typically lower-income families who can’t afford solar installation fees.

The NEM v3 proposal of $8 per kW of installed solar capacity per month may be a viable approach, but it does seem too high. We sized our solar panels to cover our needs at 8.5 kW. The additional $68/mo connection fee isn’t terrible, but it also isn’t much less than our $120/mo electric bill before solar (usage, not taxes). With cloudy days, pulling from the grid could easily make the bill as much as or higher than it was before solar. There should be a fee, but it needs to be reasonable.
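Here is that math in one place (the cloudy-month grid charge is a hypothetical figure for illustration):

```python
system_kw = 8.5
fee_per_kw_month = 8.0                 # proposed NEM v3 charge, $/kW/month
connection_fee = system_kw * fee_per_kw_month
print(connection_fee)                  # $68.0/month before any grid usage

pre_solar_bill = 120.0                 # $/month, usage only
cloudy_month_grid_power = 60.0         # hypothetical $ of grid power pulled
print(connection_fee + cloudy_month_grid_power > pre_solar_bill)  # True
```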

Solar Voltage Rise

Another problem with residential solar is the challenge of residential voltage rise. This is caused by NEM’s ability to “sell back” power to the utility company. To push excess solar-generated power back to the grid, the solar system must raise its voltage slightly higher than the grid voltage. This is generally fine when there is demand on the local grid for that power. However, as more and more homes in the neighborhood add solar, all of those systems are trying to push their excess power back to the grid at the same time (morning to early afternoon, when the sun is brightest). As each system bumps its voltage to push power onto the grid, the local grid voltage starts to rise. I have seen a nominal 220V jump to 224V in our area. At some point that voltage becomes too high and electronic equipment will start to fail. I’m sure the utility company has ways to deal with this, including sending frequency changes to signal solar inverters to stop production, but this would be added investment for them to accommodate solar.
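A rough back-of-the-envelope illustrates the mechanism, assuming a simple resistive service drop (the resistance figure is hypothetical):

```python
export_kw = 8.5          # solar power being pushed back to the grid
grid_volts = 220.0       # nominal local voltage
line_resistance = 0.05   # ohms, round-trip service conductor resistance (assumed)

current = export_kw * 1000 / grid_volts  # ~38.6 A of export current
rise = current * line_resistance         # ~1.9 V above grid at this one house
print(grid_volts + rise)                 # and neighboring exporters stack on top
```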

Solar Duck Curve

The duck curve: a chart showing the gap between electricity demand and available solar energy throughout the day.

Even if solar voltage rise is managed, there is another problem. When the sun is out, utility generation demand can drop significantly, but it surges when the sun goes down. 4pm to 9pm happens to be the time of greatest demand. It coincides with evening meal preparation, additional lighting and after-work entertainment. This means the demand pattern has changed, creating a challenge for utility companies. The demand curve before solar looked like a camel’s back, but now the “solar production” dip is so dramatic that it forms a massive and steep jump in the evening. That is difficult for the grid and for power generation to match. The new demand graph is called the solar duck curve due to its shape (see https://www.energy.gov/eere/articles/confronting-duck-curve-how-address-over-generation-solar-energy).

Tesla Powerwall

Voltage rise and the duck curve are both driven by solar systems that “sell back” their excess power. The solution is to store that excess power locally in energy storage devices (ESDs), basically batteries like the Tesla Powerwall or similar solutions by Enphase or LG. That allows homes to switch to battery power in the afternoon and through the evening (especially 4pm-9pm, when energy demand is highest). With local storage, the voltage rise and duck curve issues are mitigated. The problem is that except for the luxury of having whole-house backup during power outages, there is no incentive to get an ESD to help shave the peak demands. A good approach by the PUC would be to discourage “selling back” power to the utility company (especially during peak demand) and instead encourage adding ESDs to store the overproduction for later use.
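A naive version of that dispatch rule might look like this (a sketch only; one call per hour so kW and kWh line up, and the 13.5 kWh default matches a Tesla Powerwall 2; real systems also handle round-trip losses, reserve capacity and utility signals):

```python
def dispatch(hour, solar_kw, load_kw, battery_kwh, capacity_kwh=13.5):
    surplus = solar_kw - load_kw
    if surplus > 0:
        # store the excess locally instead of exporting it to the grid
        battery_kwh = min(capacity_kwh, battery_kwh + surplus)
        grid_kw = 0.0
    elif 16 <= hour < 21 and battery_kwh > 0:
        # cover the 4pm-9pm peak from the battery before touching the grid
        drawn = min(-surplus, battery_kwh)
        battery_kwh -= drawn
        grid_kw = -surplus - drawn
    else:
        grid_kw = -surplus  # off-peak deficit comes from the grid
    return battery_kwh, grid_kw

print(dispatch(hour=12, solar_kw=6.0, load_kw=1.5, battery_kwh=5.0))  # (9.5, 0.0)
print(dispatch(hour=18, solar_kw=0.0, load_kw=2.0, battery_kwh=9.5))  # (7.5, 0.0)
```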

Residential Solar Advantages

Residential solar has real advantages. While I highlighted the downsides, I would be remiss not to point out some of the advantages that are not tied to financial benefit to the owner. For one, distributing power generation closer to the edge where demand originates (homes) can help relieve the constantly growing demand on the electrical grid. With ESDs, we can reduce the load on the grid, which often must transmit power over great distances to meet rising demand. Local production of power helps.

Another benefit of local home generation is the homeowner’s greater awareness of their energy footprint. With a home solar system, a battery (ESD) and easy tools to monitor usage, the homeowner becomes very conscious of how much energy is being used, and wasted. We began to optimize our usage of devices to reduce demand or align it with the time when we have the most energy production. It is a game, and I know this can be subjective, but it is also a powerful learning opportunity that I believe can help us optimize for a greener future.

Conclusion

We need to encourage renewable energy generation and storage at the edge, where it is being used. At the same time, we need to ensure funding for a resilient power grid without placing undue burden on lower-income families. There are other renewable energy options, but I also believe we should exploit the fusion power that shows up each day in our sky (our local sun) as much as possible. It’s incredible how much power is available to us every day in the sky, and our technology is just beginning to tap it. More efficient solar cells and higher-capacity batteries are on the horizon (no pun intended). The future of sustainable and environmentally friendly energy is bright; we just need the courage to pursue it.