
AI for Kuala Lumpur — Full Project Journey

This page is the narrative backbone of the project. It explains how the platform started, how the architecture evolved, what was successfully implemented, which deployment and product challenges were encountered, why key technical decisions were made, and what future directions remain open. It is designed as a portfolio-grade project retrospective with a strong product, engineering, and governance angle.

What this project was meant to become

Project ambition

AI for Kuala Lumpur was never meant to be just a static dashboard. The original ambition was to simulate a modern enterprise-grade urban intelligence platform capable of combining live data, analytics, AI interpretation, and decision support in one coherent product.
The idea was to create a project that could speak both to recruiters and to technical teams: something visually tangible, but also supported by real backend logic, governance thinking, and architectural depth.
This is why the project intentionally mixes product design, real-time thinking, AI reasoning, and data platform concepts rather than focusing on a single technical layer.

Initial intention

How the project started

The project started from a simple observation: many portfolio projects display charts, but very few explain how data is ingested, refreshed, governed, interpreted, and turned into actions.
The goal was therefore to build the visible product layer first: a premium interface showing a believable smart-city use case. Once the product became tangible, the backend, live logic, warehouse thinking, and AI layer were progressively added.

Concrete implementation steps

What was actually built

The frontend was built in Next.js with a premium dashboard approach, multilingual support, responsive cards, a live operational overview, and an AI conversation panel.
The backend was built in FastAPI and structured around live city snapshots, AI copilot endpoints, warehouse status endpoints, governance knowledge endpoints, and refresh logic.
A Redis-compatible live layer was designed to hold the latest city snapshot, while a DuckDB + dbt warehouse layer was introduced to represent the more analytical and transformed side of the platform.
The project also incorporated governance content, data quality logic, and a documented operating model in order to position the platform as something closer to a serious data product than a visual demo.
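To make the live layer described above concrete, here is a minimal sketch of the snapshot publish/read cycle. It is an assumption-level sketch, not the project's actual code: a plain dict stands in for the Redis-compatible store (a real deployment would use `redis.Redis(...).set/get`), and the key name `city:snapshot:latest` and the district fields are hypothetical.

```python
import json
import time

# Stand-in for the Redis-compatible live store; a dict keeps the sketch
# self-contained while preserving the key/value access pattern.
_live_store: dict[str, str] = {}

SNAPSHOT_KEY = "city:snapshot:latest"  # hypothetical key name

def publish_snapshot(districts: dict) -> None:
    """Serialize and store the latest city snapshot."""
    snapshot = {"generated_at": time.time(), "districts": districts}
    _live_store[SNAPSHOT_KEY] = json.dumps(snapshot)

def read_snapshot():
    """Read the latest snapshot, as a live endpoint would."""
    raw = _live_store.get(SNAPSHOT_KEY)
    return json.loads(raw) if raw else None

publish_snapshot({"Bukit Bintang": {"aqi": 62, "traffic_index": 0.71}})
print(read_snapshot()["districts"]["Bukit Bintang"]["aqi"])  # → 62
```

The warehouse side follows the same separation: the live store only ever holds the latest state, while the DuckDB + dbt layer accumulates history for analytical queries.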

Real-time product layer

Live system foundation

One of the strongest parts of the project is the live layer. The dashboard was designed to refresh frequently, change district focus, and give the impression of a city command center reacting to dynamic operational conditions.
This resulted in a multi-district overview, map-based district visualization, live cards, and continuously updated recommendations that make the product feel active and decision-oriented.
The live mode became a major product differentiator because it turned the project from a static showcase into something that feels operational.
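The refresh behavior behind the live mode can be sketched as a generator plus a worker loop. The district names and metric ranges below are illustrative assumptions, not the platform's real simulation model; the worker loop is the shape of the local always-on mode discussed later.

```python
import random
import time

DISTRICTS = ["Bukit Bintang", "Chow Kit", "Bangsar"]  # illustrative subset

def generate_snapshot(seed=None) -> dict:
    """Produce one simulated city snapshot, one record per district."""
    rng = random.Random(seed)
    return {
        d: {
            "traffic_index": round(rng.uniform(0.2, 0.95), 2),
            "aqi": rng.randint(30, 120),
        }
        for d in DISTRICTS
    }

def refresh_forever(interval_s: float, publish) -> None:
    """Background worker: regenerate and publish on a fixed cadence.
    In production this would run in a dedicated worker process."""
    while True:
        publish(generate_snapshot())
        time.sleep(interval_s)

snap = generate_snapshot(seed=42)
print(sorted(snap))
```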

From assistant idea to grounded reasoning

AI copilot evolution

The AI copilot started as a simple assistant idea, but quickly evolved into a more structured reasoning component. The objective was not to add an AI label for style, but to make the assistant useful, contextual, and explainable.
This led to an intent classification layer capable of distinguishing operational, analytical, explanatory, and out-of-scope questions. That design made the assistant much more credible and better aligned with the project domain.
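The routing idea can be sketched with simple keyword heuristics. The keyword lists below are hypothetical, and the real layer could equally use an LLM or a trained classifier; what matters is that each question is assigned one of the four categories before any answer is generated.

```python
# Hypothetical keyword heuristics per intent; order matters, since the
# first matching category wins.
INTENT_KEYWORDS = {
    "operational": ["right now", "current", "live", "today", "alert"],
    "analytical": ["trend", "compare", "average", "last month", "history"],
    "explanatory": ["why", "how does", "what is", "explain"],
}

def classify_intent(question: str) -> str:
    q = question.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in q for k in keywords):
            return intent
    return "out_of_scope"  # anything unmatched is politely declined

print(classify_intent("What is the current traffic in Bukit Bintang?"))  # → operational
```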
The assistant was then grounded with a simplified RAG pipeline: live snapshot context, warehouse context, and governance context are combined before generating an answer. This was essential to reduce hallucinations and keep the copilot anchored to project reality.
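The grounding step amounts to assembling the three context sources into one constrained prompt before the LLM call. This is a sketch under assumptions: the section labels and instruction wording are illustrative, not the project's actual prompt template.

```python
def build_grounded_prompt(question: str,
                          live_ctx: str,
                          warehouse_ctx: str,
                          governance_ctx: str) -> str:
    """Combine the three context sources before the LLM call, so the
    answer stays attached to project data rather than free generation."""
    return (
        "Answer ONLY from the context below. If the context is "
        "insufficient, say so.\n\n"
        f"[LIVE SNAPSHOT]\n{live_ctx}\n\n"
        f"[WAREHOUSE]\n{warehouse_ctx}\n\n"
        f"[GOVERNANCE]\n{governance_ctx}\n\n"
        f"Question: {question}"
    )

prompt = build_grounded_prompt(
    "Which district needs attention?",
    live_ctx="Bukit Bintang traffic_index=0.91",
    warehouse_ctx="7-day avg traffic_index=0.64",
    governance_ctx="Traffic metric owner: mobility team",
)
print(prompt.splitlines()[0])
```

The explicit "answer only from context" constraint is what keeps out-of-scope questions from drifting into invented city facts.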

Why this project is more than a dashboard

Governance and data quality layer

A key project decision was to avoid positioning the platform as a simple dashboard. Governance, lineage, data quality, role ownership, and AI grounding were progressively introduced to make the product look and behave like a serious enterprise data system.
This added major value because it aligned the project with real business expectations: trust, control, explainability, and operating discipline.
The governance and documents pages therefore became essential parts of the project rather than secondary documentation.

What happened in the real world

Deployment challenges

One of the most instructive phases of the project came during deployment. The first serious deployment attempt was made on Vercel, because it is a natural option for Next.js. However, the monorepo structure, root directory configuration, and build behavior introduced repeated friction.
The project then pivoted to Netlify for the frontend and Render for the backend. This solved a large part of the deployment problem, but revealed another reality: running a continuously updating background worker in the cloud is rarely fully free.
Rather than forcing a paid architecture, a pragmatic decision was made: preserve the live user experience by letting the frontend trigger snapshot generation in demo mode. This retained the product feel while respecting cost constraints.
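The demo-mode compromise can be sketched as an on-demand endpoint with a freshness check: the frontend polls, and a new snapshot is generated only when the cached one is stale, so no always-on worker is required. The TTL value and snapshot shape below are hypothetical.

```python
import time

SNAPSHOT_TTL_S = 30  # hypothetical freshness window

_cache = {"snapshot": None, "generated_at": 0.0}

def make_snapshot() -> dict:
    # Stand-in for the real snapshot generator.
    return {"districts": {"Chow Kit": {"aqi": 58}}, "generated_at": time.time()}

def get_snapshot() -> dict:
    """Called by the frontend on each poll: regenerate only when stale,
    so the live feel survives without a paid background worker."""
    now = time.time()
    if _cache["snapshot"] is None or now - _cache["generated_at"] > SNAPSHOT_TTL_S:
        _cache["snapshot"] = make_snapshot()
        _cache["generated_at"] = now
    return _cache["snapshot"]

first = get_snapshot()
second = get_snapshot()  # within the TTL: served from cache
print(first is second)   # → True
```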
This deployment journey became a strength of the project because it showed not only technical skill, but also real-world adaptation, trade-off management, and product-minded decision making.

Trade-offs and architecture choices

Key technical decisions

A first major decision was to maintain two architectural modes: a richer local mode closer to full streaming logic, and a public cloud demo mode optimized for free-tier deployment and visual reliability.
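The two-mode decision typically reduces to a single configuration switch. This is a hypothetical settings sketch (the variable name `APP_MODE` and the intervals are assumptions), showing how one flag can decide whether a background worker runs at all.

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    """Hypothetical settings capturing the two-mode decision."""
    mode: str                # "local" (full streaming) or "demo" (cloud)
    background_worker: bool  # only the local mode runs a worker process
    refresh_interval_s: int

def load_settings() -> Settings:
    mode = os.environ.get("APP_MODE", "demo")  # cloud demo is the safe default
    if mode == "local":
        return Settings(mode, background_worker=True, refresh_interval_s=5)
    return Settings(mode, background_worker=False, refresh_interval_s=30)

print(load_settings().mode)
```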
A second major decision was to remove or hide unfinished pages rather than leaving them visible in a broken state. This improved product quality and avoided giving the impression of an incomplete or unstable application.
A third decision was to concentrate the map and district interpretation into the live overview rather than fragmenting the experience across too many pages. This strengthened coherence and made the product easier to understand.

What is already strong today

Current state of the project

At this stage, the project is already strong in several dimensions: premium UI, clear use case, real backend structure, live experience, AI copilot, governance framing, multilingual support, and a coherent story.
Most importantly, it no longer feels like a simple technical exercise. It now looks like a realistic data and AI product prototype built with enterprise constraints in mind.
This makes it particularly relevant for positioning around data engineering, analytics engineering, AI engineering, data governance, and consulting-oriented product work.

What this project can become next

Future perspectives

The next natural evolution would be to connect real external APIs for weather and air quality, keep a cached backend layer, and progressively transition from simulated metrics to hybrid or fully real signals.
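The cached backend layer for real external data could look like a small TTL cache in front of the API call. This is a sketch under assumptions: the fetch function below is a local stand-in for a real HTTP call to a weather/air-quality API, and the TTL and response fields are illustrative.

```python
import time

CACHE_TTL_S = 600  # hypothetical: refresh external data every 10 minutes

_weather_cache: dict = {}

def fetch_weather_uncached(city: str) -> dict:
    # Stand-in for a real HTTP call to an external weather/air-quality
    # API; kept local so the sketch stays self-contained.
    return {"city": city, "temp_c": 31.0, "aqi": 74}

def fetch_weather(city: str) -> dict:
    """Serve from the backend cache when fresh, hit the API otherwise."""
    now = time.time()
    cached = _weather_cache.get(city)
    if cached and now - cached[0] < CACHE_TTL_S:
        return cached[1]
    data = fetch_weather_uncached(city)
    _weather_cache[city] = (now, data)
    return data

print(fetch_weather("Kuala Lumpur")["aqi"])  # → 74
```

A layer like this also eases the simulated-to-real transition: simulated values can be swapped for API responses city by city without changing the frontend contract.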
Other future directions include stronger warehouse automation, full dbt refresh logic in cloud mode, richer data cataloging, more advanced RAG over documentation, and broader explainability.
A more ambitious long-term version could also become multi-city, integrate real anomaly detection, and evolve from a portfolio project into a reusable urban intelligence product pattern.

Why this project matters

Conclusion

AI for Kuala Lumpur ultimately became much more than a dashboard project. It became a demonstration of how product thinking, backend engineering, live data patterns, AI grounding, governance logic, and real-world deployment constraints can be combined in one coherent system.
Its real value is not only in what was built, but also in how the project evolved, how problems were handled, and how technical ambition was balanced with practicality.
That is what makes the project credible, memorable, and useful as a portfolio centerpiece.