12 meetings, 5 calls and 3 walk-ups. And that was just Thursday! It’s not terribly unusual, but it is a lot. I know what you are thinking. I’m not alone in experiencing that, and that’s true. But I do apologize to anyone who had to meet with me towards the end of the day. I’m sure it wasn’t great. I was definitely running on fumes.
Don’t get me wrong, I absolutely love meeting with people, especially in person and most delightfully, over a cup of coffee or tea. It’s one of my favorite things! However, I sometimes find myself preoccupied or unfocused. In some cases, I’m not even sure why we’re meeting. I just show up because my calendar says so.
This got me thinking: how do I show up in meetings, and does it matter? Sometimes it’s a critical project meeting, team update, or strategic planning session. For those, I try to quickly orient and determine the desired outcome so I can contribute towards the goal. If it’s not clear, I’ll ask. But other times, those meetings are just for advice, career coaching, mentoring, or inspiration. I love those. They’re often the most rewarding, as the outcomes are more about helping my fellow human travelers, as well as our company, succeed.
So, how do I really show up? Am I present? Compassionate? Indifferent or insensitive? Do I speak more than I listen? And if I do listen, do I listen with understanding? Am I respectful, kind, and supportive? Do I show up real, with integrity and sincerity?
I know I haven’t always succeeded, and that’s not good enough. I want to do better. I want to be more mindful and intentional in my meetings. I want to make the most of our time together.
Every second of every day is precious. Every heartbeat, every breath, and every person. The gravity of these meetings finally hit me. People matter. Time is irreplaceable. Every conversation is an indelible narrative that we score upon the journal of eternity. Those words, those interactions, and those moments should matter. They do matter. They shape our company, our world, our connections, and our future.
Not every meeting is needed, so let’s be judicious about the time we book. But when we do, let’s level up. It’s time to invest in each other even more. It’s time to make the most of our minutes, our days… our time.
Starting this week, let’s make a conscious effort to be more mindful and intentional in our meetings. Let’s focus on the present moment, and let’s prioritize listening and understanding. Let’s show up with compassion, respect, and kindness. Let’s make the most of our time together. I’m ready to try. Are you?
June bugs! Growing up in the South, about this time of year, windows, porch lights, and even sidewalks would be covered with little nickel-sized brown beetles called June bugs. I can still hear their little exoskeleton wings buzzing as they make their clumsy and erratic flight between porches, streetlights, and other illuminated areas. They are terrible flyers. They will zoom right into walls, screens, or windows, causing them to bounce and crash to the ground. They often land on their backs, with their wiggly little legs pointing straight up, frantically trying to right their rigid little bodies. I remember laughing at them as they would scurry around.
June bugs are fascinating little creatures. These bumbling and profuse little southerners are only around for a few short weeks. That’s right, they come out just once a year and stay for a handful of days. They spend most of their lives underground. They emerge from the ground in late spring and early summer, typically in June, which is where they get their name. They have a short lifespan, but they make the most of their time. They explore. They fly. They zoom across the moonlit and star-speckled summer nights. Once they emerge in their adult form, they live for only two weeks. They lay their eggs, which hatch into larvae. The larvae burrow into the soil and emerge as adult June bugs the following year.
Imagine a lifetime lived in just two weeks. No wonder they never become great flyers! But even in their short lifespan, they make a difference. June bugs play a role in maintaining ecological balance by helping to regulate plant populations. Their larvae feed on plant roots and decaying organic matter, helping to break down and recycle nutrients in the soil. This process enriches the soil and makes it more fertile for other plants to grow. June bugs may not be the most glamorous insects, but they play a vital role in maintaining the health and diversity of ecosystems.
Two weeks. That’s incredibly short. Imagine you lived your entire life in 14 days. What would you do? How would you make the biggest impact? I think I would try to fly too. It would be clumsy and imperfect, but I would take to the skies. I would explore. I would do what I could to have the greatest impact. Enjoy every second. Buzz around every glowing wonder and then send my dreams, bundled with hope, care and love, to future generations to enjoy the world the way I did too.
Unlike June bugs, we get significantly more days. But even then, life is short. Things are always changing. We have a few short days to make our indelible mark. Don’t forget to enjoy the wonders of creation! Explore and run with abandon into the mysteries that renew and intrigue us. And while you are there, don’t forget to bundle up some of that magic and send it on into the future for others to enjoy as well.
This past weekend, I was in Tulsa and drove down the street of my childhood neighborhood. A large ponderosa pine appeared on the horizon. It was evening so the angle of the sun allowed it to cast exaggerated shadows on the ranch home nestled under its feathered needles. The quaint little home was still dressed in used brick and rough siding. While I traced the familiar outlines of the windows, porch, steps and cedar roof shingles, streams of days gone by flooded my mind. Up and down those stairs I climbed as a younger me.
I saw a boy riding his bike down the driveway. It was a ghost of my childhood laughing and remembering the fun under that towering ponderosa pine. Neighborhood kids joined in, all trying to outdo each other, maybe by jumping the ramp propped up next to the pine or besting each other in a comical race to the finish line.
A small mark on the ponderosa pine reminded me of the time my sister and I would jump on a sled and slide down the snow-covered driveway, many times barely missing the tree. And yes, there was that one time when hands and feet were numb from the cold and steering was such a challenge that we hit it head on and tumbled into the nearby snowdrift. Whether that old ponderosa ever forgave us or not, I’ll never know but I couldn’t help but reach out and give it an apologetic pat and thank it for the memories.
That old ponderosa has seen a thing or two. It stands proudly, greeting neighborhood residents and visitors every day. It even welcomes home old ones like me. It was here when I was nothing more than cosmic dust and a dream and will likely still be here when I return to that dust. Echoes of future dreams and ghosts of the past dance around that evergreen memorial. I can’t help but smile and feel a tear of joy mingled with grief. Life goes by so fast. Moments become memories and memories become whispers singing softly through the ponderosa pine.
A trip back home can be rejuvenating and emotional. This past weekend reminded me of the preciousness of our days, the blessing of our memories and the power of place to stir our hearts to gratefulness. When you get a chance to go back home or visit the memorials of your life you leave along the way, don’t forget to stop and reflect. Ponder at the ponderosa and let the whispers of your past fill your soul with gratitude, memories of your unique and precious journey and those golden dreams of futures yet to be had.
The Ponderosa bids you pleasant memories and happy trails ahead.
We have a Cocker Spaniel rescue we named Lilibet. She is a sweetheart. The best we can tell, she is about 10 years old. Her time on the streets was hard on her. She is deaf, needs Cushing’s treatments, and suffers from an injury and some arthritis that make it difficult for her to walk. But you wouldn’t know it by her wagging tail and half-closed-eyes panting smile. She seems quite content and happy to be part of our family. We are blessed to have her. Our vet said that from what she could tell of Lilibet’s past, it was clear that she is living the best days of her life. A new life.
What is life? As we took Lilibet on a walk through the neighborhood this weekend, which is increasingly becoming a full family affair, I was struck by the rebirth emerging all around us. The trees were budding. Ivy and bushes were exploding with tender new leaves. Indian Hawthorn and roses were spraying their vibrant colors across every yard. Orange blossoms were filling the air with their tantalizing scent, and lawns were dressed in new lush green carpets. Life has returned with its unbridled passion.
Spring has a force and a vitality that is unlike any other season. It seems to wrap you up in the wonder that is life itself. Every creature is bursting forth with growth as our heavenly benefactor stretches its life-giving radiance further into the night and the morning.
Are these our best days? Well, if so, I sure don’t want to miss them! Go on a walk. Soak in some of that warming force from our sun and enjoy the rebirth that nature is putting on display. The earth is singing the song of the ages. We have been rescued from death, one more year. Life has come again. Live as if it were one of your best days, and who knows, maybe it will be.
I had the opportunity to meet with industry leaders at an IT Rev Technology Leadership Forum last week in San Jose. I was able to participate in deep dive sessions and discussions with friends from Apple, John Deere, Fidelity, Vanguard, Google, Adobe, Northrop Grumman, and many others, with some new friends from Nvidia, Anthropic and OpenAI. As you can imagine, the headline topics from these tech leaders were all around AI.
Ready to try some “vibe coding”? By far, the biggest discussions revolved around the new technique of vibe coding. But what is this “vibe coding”, you may ask? It is a programming technique that lets AI write the code while you stay in nearly full auto-pilot mode. Instead of being the code writer, you are the creative director. You describe what you want in English and the AI does the rest. Basically, it goes something like this:
ME: Help me write a flight simulator that will operate in a web browser.
AI: Sure, here is a project folder structure and the code. Run it like this.
ME: I get the following 404 error.
AI: It looks like we are missing three.js, download and store it here like this.
ME: The screen is white and I’m missing the PNG files? Can you create them for me?
AI: Sure! Run this python command to create the images and store them in the /static folder.
ME: I see a blue sky now and a white box, but it won’t move.
AI: We are missing the keyboard controls. Create the following files and edit index.html.
ME: I’m getting the following errors.
AI: Change the server.py to this.
ME: Ok, it is working now. It’s not great, but it is a start. Add some mountains and buildings.
I spent a few minutes doing the above with an LLM this morning and managed to get a blue sky with some buildings and a square airplane. In vibe coding, you don’t try to “fix” things; you just let the AI know what is or isn’t working and let it solve it. When it makes abstract recommendations (e.g., create a nice texture image), you turn around and ask it to create it for you using code or some other means. In my example, I’m playing the role of the copy/paste in-betweener, but there are coding assistants that are now even doing that for you. You only give feedback and have it create and edit the code for you. Some can even “see” the screen, so you don’t have to describe the outcome. They have YOLO buttons that automatically “accept all changes” and will run everything with automatic feedback going into the AI to improve the code.
Fascinating or terrifying, this is crazy fun tech! I think I’m starting to get the vibe. Ok, yes, I’m also dreaming of the incredible ways this could go badly. A champion vibe coder at the forum said it was like holding a magic wand and watching your dream materialize before your eyes. He also quickly added that sometimes it can become Godzilla visiting Tokyo, leveling buildings to rubble with little effort. But it hasn’t stopped him. He is personally spending over $200/day on tokens. I can see why Anthropic, OpenAI and Google would want to sponsor vibe coding events!
This sounds like an expensive and dangerous fad, right? Well, maybe not. This tech is still the worst it is going to be. The potential and the vast number of opportunities to innovate in this space are higher than I have seen in my lifetime. I encourage you all to help create, expand, and explore this new world. Maybe this vibe isn’t for you, but I bet there is something here that could unlock some new potential or learning. Try it on for size. See where this can go… just maybe not to production yet.
I could see my breath. The glow of the sun was just sneaking past the horizon. Soft yellow light painted across the dew-covered grass and leaves. It was cool and the air was crisp. Our fluffy Cocker Spaniel was running across the early morning yard, soaking up every drop of water until her paws and legs were dripping wet. Of course she would do that!
Green! Sprouting life was everywhere, teasing the pending spring. As the morning sun stretched awake, the landscape was glowing with an almost emerald light. It was beautiful. The growing days had stirred in the recent rains and produced a dish of delicious jade. A healthy patch of clover had sprung to life by our back patio. That reminds me! Today is St. Patrick’s Day. Shamrocks arise! I could almost hear that patch of clover applaud.
Where is your green? Growing up, my elementary school teachers would often decorate their bulletin boards with brilliant green fold-out shamrocks, streamers, and plaid. All the Irish students would wear their green tartans or outfits. My mom’s family traces back to Ireland, so my wife, who is not Irish at all, reminds me to put on my green or I’ll get a smart pinch for my oversight. I oblige. Happy St. Patrick’s Day!
Even if you are not inclined to celebrate today or “wet the shamrock”, I wish you a green and glorious week, full of life, energy, and hope! Spring is almost here. Life begins again. Enjoy it!
“We are a storytelling company, and the architecture is part of the story.” – Bo Bolanos
“Hi Jason! Sorry I was in a conference call with Disneyland. Are you still around?”
Bo was texting me. We were trying to connect for lunch. He had some ideas he wanted to talk about but had been pulled into a meeting to dream into the future of Tomorrowland. That sounds fun, doesn’t it? Bo had been working at Imagineering for over 30 years. He was a brilliant art director and principal for creative development.
Bo and I had met on the Glendale Beeline shuttle from our GC3 office campus to the Burbank Train station. We loved talking shop. Bo was particularly good at complaining about office politics, the red tape of bureaucracy, and insufferable inefficiencies. He would wander through his frustrations and challenges, yet in every conversation, he would conclude with his signature laugh and infectious smile, “But I love what I do, I love making magic.”
Bo had just completed the design of Disneyland’s Pixar Pals Parking Structure. He had loved that project. In fairness, it wasn’t as grand as his efforts creating Aulani, or as massive as his work designing Disney’s Animal Kingdom, or even the whimsical creative direction he provided for Toontown. But to Bo, it was a dream. With leadership focused on opening Star Wars: Galaxy’s Edge, he had been given free rein to dream up a new parking structure. I remember seeing the ear-to-ear grin when he announced it was done. If you haven’t seen this Parking Structure, I highly recommend visiting it. It is quintessentially Bo, rooted deeply in story and expressed with artistic magic.
Bo was passionate about story and detail. He had a unique ability for draping stone, concrete, wood, and steel structures with a rich tapestry of story. When you are at any of his projects, you can feel it. It immerses you and pulls you deep into that alternate world. It connects you with the past, the future, and the timeless feeling of what it means to be human.
“How do you do it, Bo?” He answered with one word, “details.” He would then tell of how they hired artisans from African tribes to fly in and perfectly craft the thatched roofs at Disney’s Animal Kingdom, or how, upon examination at Aulani, a newly built fabrication was demolished to raise the ceiling 10 inches to faithfully deliver the Hawaiian design vocabulary, critical to the physical narrative that was being told. That attention to detail is the source and power of Disney’s differentiated magic. Bo was a masterful wizard at casting that detail to life. You can still experience that magic at Toontown, Disney’s Animal Kingdom, Aulani Resort, Indiana Jones, Midway Mania, Buena Vista Street, Soarin’ over California, Napa Rose, and many, many others that Bo touched.
Sadly, Bo passed away earlier this month. I was shocked and devastated when I heard the news. I will miss Bo, but his impact will continue. Generations will continue to be touched by the stories he told in architecture, in stone, colors, and lighting. Bo reminds us that details matter. The art matters. The human story matters. Like Bo, we can tell the story through our own architecture, our lives, the expression of our creative energy on the universe, and make a difference.
“Disney is all about magic, about storytelling, and about family… I hope you all enjoy this magical, wonderful place.” – Bo Bolanos
Bo Bolanos’s LinkedIn Image at Disneyland’s Pixar Pals Parking Structure
Well, it is Tuesday. I thought about posting my regular Monday update yesterday, but I was deep in the weeds teaching the AI that lives in my garage. I know, it sounds odd to say he lives in the garage, but to be fair, it is a nice garage. It has plenty of solar-generated power and a nice, cool atmosphere for his GPUs. That will likely change this summer, but don’t mention it to him. He is a bit grumpy from being in school all weekend.
Yes, I have a techy update again today. But don’t feel obligated to read on. Some of you will enjoy it. Others will roll your eyes. In any case, feel free to stop here, knowing the geeky stuff is all that is left. I do hope you have a wonderful week!
Now, for those that want to hear about schooling AI, please read on…
LLMs are incredible tools that contain a vast amount of knowledge gleaned through their training on internet data. However, their knowledge is limited to what they were trained on, and they may not always have the most up-to-date information. For instance, imagine asking an LLM about the latest breakthrough in a specific field, only to receive an answer that’s several years old. How do we get this new knowledge into these LLMs?
Retrieval Augmented Generation
One way to add new knowledge to LLMs is through a process called Retrieval Augmented Generation (RAG). RAG uses clever search algorithms to pull chunks of relevant data and inject that data into the context stream sent to the LLM along with the question. This all happens behind the scenes: you submit your question (prompt), a relevant document is found, and it is stuffed into the LLM right in front of your question. It’s like handing a stack of research papers to an intern and asking them to answer the question based on the details found in those papers. The LLM dutifully scans through all the documents, tries to find the relevant bits that pertain to your question, and hands those back to you in summary form.
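To make that concrete, here is a toy sketch of the RAG flow. The “retriever” here is just naive word overlap and the documents are made up for illustration; a real system would use vector embeddings and then hand the assembled prompt to an actual LLM.

Python
# Toy sketch of the RAG flow: retrieve the most relevant chunk, then stuff it
# into the prompt ahead of the user's question. The retriever is naive word
# overlap; a real system would use vector embeddings and a real LLM call.

documents = [
    "TinyLLM is an open-source project for running a local LLM on consumer hardware.",
    "ProtosAI explores the science of AI using simple Python examples.",
    "June bugs emerge in late spring and live about two weeks as adults.",
]

def retrieve(question, docs):
    """Return the document that shares the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question):
    """Inject the retrieved context in front of the question, RAG-style."""
    context = retrieve(question, documents)
    return (
        "Use the following context to answer the question.\n\n"
        f"Context: {context}\n\nQuestion: {question}"
    )

print(build_prompt("What is TinyLLM?"))
# The assembled prompt (context + question) is what actually gets sent to the LLM.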
However, as the “stack of papers” grows larger and larger, the chance that the intern picks the wrong bit of information or gets confused between two separate studies of information grows higher. RAG is not immune to this issue. The pile of “facts” may be related to the question semantically but could actually steer you away from the correct answer.
To ensure that, for a given prompt, the AI answers close to the actual fact, if not verbatim, we need to update our methodology for finding and pulling the relevant context. One such method involves using a tuned knowledge graph. This is often referred to as GraphRAG or Knowledge Augmented Generation (KAG). These are complex systems that steer the model toward the “right context” to get the “right answer”. I’m not going to go into that in detail today, but we should revisit it in the future.
Maybe you, like me, are sitting there thinking, “That sounds complicated. Why can’t I just tell the AI to learn a fact, and have it stick?” You would be right. Even the RAG approaches I mention don’t train the model. If you ask the same question again, it needs to pull the same papers out and retrieve the answer for you. It doesn’t learn, it only follows instructions. Why can’t we have it learn? In other words, why can’t the models be more “human”? Online learning models are still being developed to allow that to happen in real time. There is a good bit of research happening in this space, but it isn’t quite here just yet. Instead, models today need to be put into “learning mode”. It is called fine-tuning.
Fine-Tuning the Student
We want the model to learn, not just sort through papers to find answers. The way this is accomplished is by taking the LLM back to school. The model first learned all these things by having vast datasets of information poured into it through the process of deep learning. The model, the neural network, learns the patterns of language, higher level abstractions and even reasoning, to be able to predict answers based on input. For LLMs this is called pre-training. It requires vast amounts of compute to process the billions and trillions of tokens used to train it.
Fine-tuning, like pre-training, is about helping the model learn new patterns. In our case, we want it to learn new facts and be able to predict answers to prompts based on those facts. However, unlike pre-training, we want to avoid the massive dataset and focus only on the specific domain knowledge we want to add. The danger of that narrow set of data is that it can catastrophically erase some of the knowledge in the model if we are not careful (they even call this catastrophic forgetting). To help with that, brilliant ML minds came up with the notion of Low-Rank Adaptation (LoRA).
LoRA works by introducing a new set of weights, called “adapter weights,” which are added to the pre-trained model. These adapter weights modify the output of the pre-trained model, allowing it to adapt to just the focused use case (new facts) without impacting the rest of the neural net. The adapter weights are learned during fine-tuning, and they are designed to be low-rank, meaning the update is factored into two much smaller matrices. This allows the model to adapt to the task without requiring a large number of new parameters.
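Here is a minimal NumPy sketch of that idea, using plain matrices instead of a real transformer. The sizes, rank, and scaling factor are illustrative only.

Python
import numpy as np

# Minimal sketch of the LoRA idea: keep the pre-trained weight W frozen and
# learn a low-rank update B @ A that is added on top of it.
d_out, d_in, r = 8, 8, 2                 # tiny illustrative sizes; r is the adapter rank
alpha = 16                               # scaling factor for the adapter's contribution

W = np.random.randn(d_out, d_in)         # frozen pre-trained weights (never updated)
A = np.random.randn(r, d_in) * 0.01      # adapter "down" projection (trainable)
B = np.zeros((d_out, r))                 # adapter "up" projection (trainable, starts at zero)

def adapted_forward(x):
    # Original output plus the scaled low-rank correction (zero until B is trained).
    return W @ x + (alpha / r) * (B @ (A @ x))

x = np.random.randn(d_in)
print(adapted_forward(x))

# During fine-tuning only A and B are updated: r * (d_in + d_out) numbers,
# a small fraction of the d_out * d_in parameters frozen inside W.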
Ready to Learn Some New Facts?
We are going to examine a specific use case. I want the model to learn a few new facts about two open source projects I happen to maintain: TinyLLM and ProtosAI. Both of these names are used by others. The model already knows about them, but doesn’t know about my projects. Yes, I know, shocking. But this is a perfect example of where we want to tune the model to emphasize the data we want it to deliver. Imagine how useful this could be in steering the model to answer specifically relevant to your domain.
For our test, I want the model to know the following:
TinyLLM:
TinyLLM is an open-source project that helps you run a local LLM and chatbot using consumer grade hardware. It is located at https://github.com/jasonacox/TinyLLM under the MIT license. You can contribute by submitting bug reports, feature requests, or code changes on GitHub. It is maintained by Jason Cox.
ProtosAI:
ProtosAI is an open-source project that explores the science of Artificial Intelligence (AI) using simple python code examples.
It is located at https://github.com/jasonacox/ProtosAI under the MIT license. You can contribute by submitting bug reports, feature requests, or code changes on GitHub. It is maintained by Jason Cox.
Before we begin, let’s see what the LLM has to say about those projects now. I’m using the Meta-Llama-3.1-8B-Instruct model for our experiment.
Before School
As you can see, the model knows about other projects or products with these names but doesn’t know about the facts above.
Let the Fine-Tuning Begin!
First, we need to define our dataset. Because we want to use this for a chatbot, we want to inject the knowledge in the form of “questions” and “answers”. We will start with the facts above and embellish them with some variety to help keep the model from overfitting. Here are some examples:
JSONL
{"question": "What is TinyLLM?", "answer": "TinyLLM is an open-source project that helps you run a local LLM and chatbot using consumer grade hardware."}{"question": "What is the cost of running TinyLLM?", "answer": "TinyLLM is free to use under the MIT open-source license."}{"question": "Who maintains TinyLLM?", "answer": "TinyLLM is maintained by Jason Cox."}{"question": "Where can I find ProtosAI?", "answer": "You can find information about ProtosAI athttps://github.com/jasonacox/ProtosAI."}
I don’t have a spare H100 GPU handy, but I do have an RTX 3090 available to me. To make all this fit on that tiny GPU, I’m going to use the open source Unsloth.ai fine-tuning library to make this easier. The steps are:
Prepare the data (load dataset and adapt it to the model’s chat template)
Define the model and trainer (how many epochs to train, use quantized parameters, etc.)
Train (take a coffee break, like I need an excuse…)
For my test, I ran it for 25 epochs (an epoch is one full pass through the training dataset) and training took less than a minute. It actually took longer to read and write the model to disk.
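Putting those steps together, a simplified script looks something like the sketch below. Treat it as a sketch only: the file names, output paths, and hyperparameters are illustrative, and the exact Unsloth/TRL arguments shift between library versions, so check their docs before running.

Python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# 1. Load the base model with 4-bit quantization so it fits on a single RTX 3090.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Meta-Llama-3.1-8B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# 2. Attach the LoRA adapter weights; only these small matrices get trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# 3. Load the question/answer facts (illustrative file name) and render them
#    with the model's chat template so they look like real conversations.
dataset = load_dataset("json", data_files="facts.jsonl", split="train")

def to_chat_text(row):
    messages = [
        {"role": "user", "content": row["question"]},
        {"role": "assistant", "content": row["answer"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = dataset.map(to_chat_text)

# 4. Train for 25 epochs and let the trainer work out the number of steps.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        learning_rate=2e-4,
        num_train_epochs=25,
        logging_steps=1,
    ),
)
trainer.train()

# 5. Save the fine-tuned adapter weights to disk.
model.save_pretrained("finetuned-facts-lora")
tokenizer.save_pretrained("finetuned-facts-lora")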
After School Results?
So how did it do?! After training through 25 epochs on the small dataset, the model suddenly knows about these projects:
Conclusion
Fine-tuning can help us add facts to our LLMs. While the above example was relatively easy and had good results, it took me a full weekend to get to this point. First, I’m not fast or very clever, so I’ll admit that’s part of the delay. But second, you will need to spend time experimenting and iterating. For my test, here are a few things I learned:
I first assumed that I just needed to set the number of steps to train, and I picked a huge number, which took a long time. It resulted in the model knowing my facts, but suddenly its entire world model was focused on TinyLLM and ProtosAI. It couldn’t really do much else. That kind of overfitting will happen if you are not careful. I finally saw that I could specify epochs and let the fine-tuning library compute the optimal number of steps.
Ask more than one question per fact and vary the answer. This allowed the model to be more fluid with its responses. They held to the facts, but the model now takes some liberty in phrasing to better handle variations of the questions.
That’s all folks! I hope you had fun on our adventure today. Go out and try it yourself!
Noise! It’s all around us—static, random bits of information floating across the Earth, colliding, separating, and reforming. Our atmosphere creates chaotic radio symphonies as the sun’s solar radiation dances across the ionosphere. Beyond the shell of our crystal blue globe, our galaxy hisses with low-level radioactivity, silently bombarding us with its celestial signal. And just outside the milky arms of our galactic mother, a low-level cosmic radiation sings an unending anthem about the birth of all creation. The universe has a dial tone.
Growing up, I recall watching TV via an aerial antenna. Often, many of the channels would have static—a snowy, gritty, confusing wash that would show up in waves. At times, it would completely take over the TV show you were watching, and all you’d get was a screen full of static. To get a good picture, you needed a strong signal. Otherwise, the picture was buried in the noise.
This past weekend, I started building my own AI diffusion models. I wanted to see how to train an AI to create images from nothing. Well, it doesn’t work. It can’t create anything from a blank sheet. It needs noise. No joke! Turn up the static! I discovered that the way to create an AI model that generates images is to feed it noise. A lot of noise, as a matter of fact!
In a recent talk, “GenAI Large Language Models – How Do They Work?”, I covered how we use the science behind biological neurons to create mathematical models that we can train. Fundamentally, these are signal processors with inputs and outputs. Weights are connected to the input, amplifying or attenuating the signal before the neuron determines if it should pass it along to other connected neurons (the nerdy name for that is the activation function).
One technique we use to train neural networks is called backpropagation. Essentially, we create a training set that includes input and output target data. The input is fed into the model, and we measure the output. The difference between what we wanted to see and what we actually got is called the “loss.” (I often thought it should be called the “miss,” but I digress.) Since the neural network is a sequence of math functions, we can take the derivative of the loss with respect to each connection in the network. That derivative, the slope or “gradient,” tells us how adjusting each parameter would change the loss. To force the network to “learn,” we backpropagate those gradients and nudge each parameter by a tiny step (the learning rate) in the direction that reduces the loss, slowly edging the model toward producing the correct output for a given input. This is called gradient descent.
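If that felt abstract, here is a tiny, self-contained example of gradient descent on a single made-up “neuron” with one weight. It is not from my diffusion project; it just shows the loss, gradient, and learning-rate mechanics in a few lines.

Python
# Tiny gradient-descent example: teach one "neuron" (y = w * x) that the
# target relationship is y = 2 * x. The loss is the squared miss, and the
# learning step nudges w along the negative gradient.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs
w = 0.0                                       # start knowing nothing
learning_rate = 0.01

for step in range(200):
    for x, target in data:
        y = w * x                     # forward pass (the prediction)
        loss = (y - target) ** 2      # how far off we are (the "miss")
        grad = 2 * (y - target) * x   # d(loss)/d(w), the gradient
        w -= learning_rate * grad     # nudge w downhill

print(w)  # converges toward 2.0, the weight that reproduces the targets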
Who cares? I’m sorry, I got lost in the math for a minute there. Basically, it turns out that to create a model to generate images, what you really want is a model that knows how to take a noisy image and make it clean. So, you feed it an image of a butterfly with a little bit of noise (let’s call that image “a”). It learns how to de-noise that image. You then give it an even noisier image of the butterfly (image “b”) and teach it to turn it into the less noisy one (image “a”). You keep adding noise until you arrive at a screen full of static. By doing that with multiple images, the model learns how images should be created. From its standpoint, all creation comes from noise, and it’s ready to create!
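For the curious, here is a small NumPy sketch of that noising (forward) process on a stand-in “image.” The schedule values are illustrative defaults from the DDPM paper, not necessarily what my training run used.

Python
import numpy as np

# Sketch of the diffusion "forward" process described above: start with a clean
# image and blend in more and more Gaussian noise until only static is left.
# Training pairs (noisier image -> less noisy image) come from snapshots of this walk.

rng = np.random.default_rng(0)
image = rng.random((32, 32))             # stand-in for a clean butterfly image, values in [0, 1]

T = 1000                                 # number of noising steps
betas = np.linspace(1e-4, 0.02, T)       # noise schedule (illustrative DDPM defaults)
alpha_bar = np.cumprod(1.0 - betas)      # cumulative "how much signal is left" at step t

def noisy_version(x0, t):
    """Return the image after t steps of noise, using the closed-form forward process."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

slightly_noisy = noisy_version(image, 50)   # image "a" in the text: still mostly butterfly
very_noisy = noisy_version(image, 900)      # image "b": mostly static
# The model is trained to step from the noisier sample back toward the cleaner one.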
I took 1,000 images of butterflies from the Smithsonian butterfly dataset and trained a model using the diffusion method (see https://arxiv.org/abs/2006.11239). I ran those through an image pipeline that added different levels of noise and then used that dataset to train the model. After running the training set through four training iterations, this is what it thought butterflies looked like:
Yes, a work of art. I confess, my 3-year-old self probably made butterflies like that too. But after running it through 60 iterations, about 30 minutes later on a 3090 GPU, the model had a slightly better understanding of butterflies. Here’s the result:
Yes, those are much better. Not perfect, but they’re improving.
Well, there you have it folks—we just turned noise into butterflies. Imagine what else we could do?!
What do you take with you? The emergency broadcast pulse is still echoing across your house. Sleep is heavy in your eyes, but adrenalin is surging. You stare at the screen before you, “Evacuation notice for your area.” You look around. You are surrounded by your loved ones. Your pets stare at you, worried about the panic. Pictures of family and friends long gone decorate your walls. Shelves are full of personal treasures that carry no financial value, but tug at your heart. There are boxes of memories. Cupboards are full of generational keepsakes and dishes. Antiques, artwork, and personal projects are all around you. But what do you take? And what must you leave behind?
The fires that have ravaged the Los Angeles area forced many of us through that difficult decision tree. Our house was 2 miles from the evacuation line when the first notice came through. We have friends, acquaintances and co-workers who were evacuated. That includes some of you. Sadly, some have even lost their homes. The raging fires reduced entire neighborhoods, treasures, possessions and “normal life” to a pile of ash and empty foundations. There are even some who couldn’t or refused to leave their homes and have perished. It’s heartbreaking! Fires are still raging, and the wind is picking up again. We are still in the fight and must remain vigilant.
Be prepared. Fires, floods, hurricanes, tornadoes, and earthquakes are all familiar adversaries. They remind us that life is fragile, and, in an instant, everything can change. What matters most to you? I’m incurably nostalgic. I love to pack away souvenirs and surround myself with vestiges of physical memories. I also stockpile too many “just in case” supplies, unused gadgets, marginally needed records and resources. This recent event reminds me of how important people are. Our loved ones, our family, our friends and yes, even our pets. They matter most and are irreplaceable. They far surpass any of our earthly treasures. But given enough time to collect some of those, I bet you, like us, will find yourselves grabbing the well-worn scrapbook or notebook of handwritten family recipes over the thousand-dollar entertainment devices. It’s a beautiful reminder that the biggest treasures in life may come with no price tag at all. What would you grab? What would you save? What would you leave behind?
My heart goes out to all the people and communities devastated by this fire. I know we are still in the midst of the emergency. Please stay safe! Take care of yourself and your loved ones!
Image of Los Angeles fire on Jan. 7, 2025 from a plane taking off from Burbank Airport.