Spring Cleaning for Your AI Strategy: A Life-Centered Approach to Governance
Most organizations approach AI governance backwards: they start with a policy document and wonder why nothing changes. Real governance isn't a compliance checkbox. It's a living conversation.
After guiding four international organizations through complete Responsible AI transformations in 2025—and conducting five full AI audits—I’ve learned that governance isn’t a document. It’s a culture. It’s spring cleaning for your organization’s relationship with technology.
Why Spring Cleaning?
Spring cleaning is a beautiful analogy for Responsible AI or AI governance.
It is a time and space where you simply tidy, clean, and reflect; a window you open to connect the outside with the inside. While you spring clean, you feel somehow relaxed, more in control, and ready for what is coming ahead. Fresh.
The most important piece of Responsible AI and AI governance is not technology. It is culture, people, community. The care, gentleness, and joy you pour into the process are what make it work for you and let innovation bloom sooner than you expect.
Okay, let me take it from the beginning, so it makes sense why I am using this analogy.
Throughout 2025, I have been architecting Responsible AI frameworks in different entities, relying mostly on innovation methodology, design thinking tools, and community facilitation techniques. Echoing the spirit of Fukuoka's humble book, One Straw Revolution, which shares much with innovation methodology, I guide teams through what I call the Look, Create, Build framework: a three-phase journey I use with clients to architect their Responsible AI practices.
Look: Opening Space to See What Is
Look is about opening up space to observe, listen deeply, and make visible what is happening inside the entity, so that we can build a common ground of understanding.
Let's go even more granular here. In Look, I open up space for teams to name and get in touch with the tensions and desires around their AI use. Each time, I am surprised by how much teams have to share once an open space is created: a space of deep listening beyond true or false, away from judgement. AI operates in what I call the tensions spectrum. It is almost like Jung's shadows: besides good and bad, or black and white, there are many grey areas. It is each entity's work to find the balance point with which they feel comfortable.
“I really appreciated this part of the masters program as it was the most practical and connected to real world examples I can see all around me!” — Workshop Participant
These shared truths are extremely important: they are signals of what matters to us as an entity, of our principles and values.
Together with these insights, I collect already existing signals of internal and external wisdom: the principles an entity already holds, the principles that matter because of the regulatory landscape it operates in, and examples of Responsible AI principles from entities that lead the way.
Create: The Messy Space of Collaborative Ideas
After Look comes Create, a messy space of collaborative ideas, where we can all pour from ourselves, so that we can embody and own what we are building in the Build phase as a team.
With information about our tensions, desires, existing internal, external and inspirational principles, we open up space to identify and define our own principles as an entity.
It is extremely important, because Responsible AI is simply about being guardians of these principles. I am not saying this to paralyze you with heaviness, far from it. These Responsible AI principles will be first versions, far from perfect, as there is no such thing in this bright new world. There is just being enough, doing our best, and surrendering.
However, principles will be our anchors in this crazily moving new technology. In such a place where everything is changing constantly—the technology, the users, the consumers, the regulations—principles are the only thing we can anchor our strategy on. They are not just “nice to have.” They are the only stable ground when the technology shifts every week. These principles will be the ones which will make us shine with our humanness and connection to meaning and doing valuable work.
Each entity will have its own Responsible AI principles, built from the bottom up. As we keep taking part in this continuous conversation, in the safe spaces we have created in each entity, we will be able to embody these principles with our unique stories and symbols. That is why each of us becomes a Guardian of these principles in these new entities, fresh after spring cleaning: far from perfect, but ours, places we feel good about belonging to.
The Tree: Who Holds This Conversation?
At this point, I introduce to the teams the concept of the Tree: who will hold ownership of this conversation? This meaningful work of pouring in care, gentleness, and deep listening is important precisely because it takes effort. So we need a person, or better several people (better diversity), to hold space and take ownership of this continuing conversation going forward.
The Tree is the human embodiment of the governance structure. It doesn’t mean they will do everything related to creating AI responsibly. They will be the containers of this conversation, a hub, a Tree to hold space. In corporate terms, this might look like an AI Ethics Committee, a Governance Lead, or a role defined in a RACI matrix. In life-centered terms, it is the Tree—the grounded presence that holds the space for the conversation to continue, the keeper of our collective agreements.
“Aysegul delivered a friendly and informative session that we hope will trigger reflection at our company and influence our AI strategy. We would be very happy to work together in the future.” — Corporate Strategy Team
Creating Your AI Inventory and Policy
At this point, in parallel with identifying and defining Responsible AI principles, we create our dynamic AI inventories. In them, we begin to list each algorithm we use internally, together with our entity's role as provider or deployer, the decision behind its use, the data each algorithm uses, and the person responsible for each tool.
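To make the inventory concrete, here is a minimal sketch of what one entry might look like as a small data structure. All field names, the sample tool, and the sample values are illustrative assumptions on my part, not a prescribed schema; your entity's inventory will reflect its own agreements.

```python
from dataclasses import dataclass

@dataclass
class InventoryEntry:
    """One row of a dynamic AI inventory (illustrative fields only)."""
    tool: str       # the algorithm or AI tool in use
    role: str       # the entity's role: "provider" or "deployer"
    purpose: str    # the decision behind its use
    data_used: str  # what data the tool processes
    owner: str      # the person responsible for this tool

# A hypothetical inventory with a single example entry
inventory: list[InventoryEntry] = [
    InventoryEntry(
        tool="CV-screening assistant",
        role="deployer",
        purpose="Shortlist candidates for human review",
        data_used="Applicant CVs (personal data)",
        owner="HR lead",
    ),
]

for entry in inventory:
    print(f"{entry.tool} | role: {entry.role} | owner: {entry.owner}")
```

Because the inventory is dynamic, the point is less the format than the habit: every new tool gets a row, and every row has a named owner.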
Hopefully we have already identified the people who will be the Tree, the container of this conversation, the owners of this process.
As you see, we have made many agreements internally and collaboratively. It is time to create our AI policy, not as another document to write and forget. Quite the contrary: it is the first version in which we write down our collaborative agreements after a thoughtful, vulnerable, honest, meaningful process, so that each of us can go back and check when we have questions (we will, because new things take time to distill). Our aim was never a final, perfect version; such a thing does not exist in this new world, remember. There are life cycles with changes. We do not aim for perfection, but for being enough, with care, joy, and compassion.
“Ayşegül didn’t just draft our organisational AI policy, she helped us really define our ethical stance and synthesised all the complex and diverse perspectives on the team. She was a real ‘strategy partner’, helping us to translate complex AI risks and knowledge into a super clear and actionable Policy and Manifesto!” — Organization Leader
Build: Bringing It to Life
Right at this point, you will realize that all these conversations have been the source of many other meaningful interior questions. Now we move into implementation—where principles meet practice.
The Traffic Light: Where Governance Really Happens
How can we define better agreements for tools that are not a simple use-or-don't-use decision? I call these yellow lights, from the traffic light analogy. Red lights are the "do not use" scenarios, green lights the "you can use" scenarios. Yellow lights are the tools that come with "but ifs," where use is conditional on guardrails.
Red is easy (stop). Green is easy (go). Yellow is where the Governance happens—where the Architect is needed. This is where we design guardrails, human-in-the-loop processes, monitoring protocols.
These special cases will expand as you become more advanced in your AI use, and as you become more aware of how AI really works and of its potential harms. Part of the work in yellow-light scenarios is simply learning how the tool actually works: building AI literacy to address the asymmetric information that causes the tension in the first place.
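The traffic-light idea can be sketched in a few lines of code. This is only an illustration of the classification, not a real policy engine; the tool names, light assignments, and guardrails below are invented examples.

```python
from enum import Enum

class Light(Enum):
    RED = "do not use"
    GREEN = "approved for use"
    YELLOW = "use only with guardrails"

# Hypothetical decisions: tool -> (light, list of guardrails).
# Only yellow lights carry guardrails; red and green need none.
decisions = {
    "public chatbot for legal advice": (Light.RED, []),
    "spell checker": (Light.GREEN, []),
    "generative drafting assistant": (
        Light.YELLOW,
        ["human review before publishing", "no personal data in prompts"],
    ),
}

for tool, (light, guardrails) in decisions.items():
    line = f"{tool}: {light.value}"
    if guardrails:
        line += " (" + "; ".join(guardrails) + ")"
    print(line)
```

Notice that all of the design effort sits in the yellow entries: red and green are single decisions, while each yellow light is a small agreement of its own.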
“The Algorithmic Impact Assessment session was a great experience! It provided a solid overview of measuring impacts and assessing risks, helping me understand the growing importance of this field. The real-world examples were especially insightful, highlighting the need for regulations and control.” — Workshop Participant
Risk Assessment: The Heart of Being a Guardian
This is one of the cores of Responsible AI, of being guardians of principles: risk assessments.
It is a surprisingly simple and beautiful process to go through. An important process inside the process.
For each algorithm with a yellow light, you list your stakeholders and think collaboratively as a team (the more diverse, the better) about the potential harms your algorithm could cause others. You have a conversation about the likelihood and magnitude of each risk (risk = likelihood x magnitude). The risks with high scores you prioritize immediately, brainstorming mitigation strategies for those potential harms. Be aware that you will never be able to mitigate every risk. There will be residual risks: risks that remain after mitigation. This is where you set AI governance controls and monitor them.
The actions you take as a result of mitigation strategies and AI governance controls are the operational heart of the Build stage of our Look, Create, Build framework.
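The scoring step described above can be sketched as a short script. The stakeholders, harms, and 1-to-5 scores here are invented placeholders; in practice these come out of the diverse team conversation, not from code.

```python
# Minimal sketch of the risk-scoring step: risk = likelihood x magnitude,
# with each factor scored 1-5 by the team, highest scores prioritized first.

harms = [
    # (stakeholder, potential harm, likelihood 1-5, magnitude 1-5)
    ("job applicants", "biased shortlisting", 4, 5),
    ("employees", "over-reliance on tool output", 3, 2),
    ("customers", "leak of personal data in prompts", 2, 5),
]

scored = [
    {"stakeholder": s, "harm": h, "risk": l * m}
    for (s, h, l, m) in harms
]

# Highest risks first: these get mitigation strategies immediately;
# whatever cannot be mitigated remains as residual risk to monitor.
for item in sorted(scored, key=lambda x: x["risk"], reverse=True):
    print(f"{item['risk']:>2}  {item['harm']} ({item['stakeholder']})")
```

The arithmetic is trivial on purpose: the value of the exercise is the conversation that produces the scores, and the ranked list is simply a shared record of where to act first.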
Once you interiorize the process, you will repeat it continuously for each algorithm you have internally. You will slowly find that this risk-focused thinking makes way for impact-focused thinking. Instead of only preventing risks, you will think about social innovations, where your new features and products not only prevent risk but also create positive social, environmental, and economic impact and value, for your entity and for your stakeholders.
If this sounds complicated, I invite you to another analogy. Think of it as a spiritual act. You enter a beautiful place in nature. You become aware, ask for permission to enter (destur), and do your best not to damage anything while you are there. Before leaving, you make sure to leave it at least the way you found it, if not more beautiful. That is why I say in my trainings: if I explained risk management frameworks to my grandma, she would look at me with her bright, smiling eyes and say, "You are telling me this as if you have discovered something new; this is how we humans have been operating for ages."
The Living System You Create
At the end of this roadmap, you will have a lived experience of a culture where openness, honesty, and vulnerability are valued, welcomed, and protected: a safe space. You will have an understanding that this framework is built for change and flexibility, that it moves in circles rather than straight lines, and that you do not aim for perfection.
The more diverse the people, the better your collective collaboration and wisdom. This framework invites both bottom-up and top-down incentives. When we need insights and ideas, we need diverse, bottom-up collaboration. At the same time, leadership taking the lead and modelling the guardian-of-principles archetype is equally important. You aim for connection instead of separation, and for this, conflict resolution and facilitated conversation can help enormously.
“Ayşegül has a rare ability to bridge the gap between technical AI concepts and social impact. Her keynote was not only inspiring but deeply practical. She engaged our diverse audience and left us with clear takeaways on responsible innovation.” — Conference Organizer
You do not have to do everything alone; ask for help from the people in your ecosystem. While I guide and hold space for some teams to architect their Responsible AI framework, for others I facilitate conflict-resolution conversations; for some I collect insights, listening deeply to their teams and making visible the deeper truth of their collaborative wisdom; for some I give monthly consultancy as part of their Responsible AI circle; and for some I prepare a curriculum on innovating responsibly with AI, supporting their AI literacy journeys.
An Invitation to Begin
This is how I have been architecting Responsible AI and AI governance frameworks. These frameworks are there for us to create fresh spaces in our entities so that impactful, meaningful and beautiful innovation can flourish.
2026 is the year to move from reacting to AI to responding with intention. Organizations are setting their strategic priorities right now. Budgets are fresh. The question is: will your AI governance be another compliance document gathering dust, or will it be the living conversation that guides your most important decisions?
I’m opening two new strategic governance engagements for Q1 2026. These are for organizations ready to do the deep work—to pause, listen to their tensions, and build the rigorous systems that protect what matters most.
If you’re a mission-driven organization, innovation director, or organizational leader who resonates with this approach—who believes that governance should be grounded in care, community, and collective wisdom—let’s talk.
Send me a message or book a 30-minute exploration call here to discuss how the Look, Create, Build framework might serve your organization’s journey.
I will be honored to walk with you in this new journey of spring cleaning.
— Ayşegül
Ayşegül Güzel is a Governance Architect specializing in Responsible AI. In 2025, she guided four international organizations through complete AI governance transformations and conducted five technical AI audits. She teaches at Elisava and speaks internationally on life-centered approaches to technology.


