UrbanKaoberg.com - There And Back Again
I built UrbanKaoberg.com in less than a month with AI. This piece is about my journey down one of the most frenetic, creative rabbit holes I have experienced and my learnings from it.
Introducing UrbanKaoberg.com:
“800+ commits. 55+ modules. 9 asset classes. 44 API endpoints. Almost 100K lines of code. All in less than a month by a person who hadn’t written code in 35 years.”
My views from the trenches about the “SaaSpocalypse” doomsday scenario that pervades financial circles right now might also surprise you.
There And Back Again
Computer programming had been a passion of mine since middle school. I wanted to be a video game designer at the age of 11 and became a self-taught programmer as a result, learning BASIC, Pascal, C, COBOL, and Assembly Language by 9th grade. I fell in love with the creative problem-solving aspect of computer programming and pursued a degree in Electrical Engineering and Computer Science. Upon graduation, I took a job as a programmer in the J. Aron division of Goldman Sachs in 1992, where I designed and coded the Sales/Trading interfaces for Currency Options. As luck would have it, a little over a year on the job, I was drafted onto the sales/trading desk, and that’s how I wound up getting “side-tracked” into a career in finance for the next three decades, where I found another version of creative problem-solving. I never revisited my first passion of coding — until this month!
If you’ve been paying attention to the AI discourse, you’ve heard the pitch: anyone can build software now. Just describe what you want, and the machine writes it for you. Ship your MVP over the weekend. Disrupt an industry before lunch.
I’m here to tell you that while that pitch is technically true, it is dangerously incomplete.
Over the past three weeks, I built a Macro/Micro financial research terminal called UrbanKaoberg.com from scratch — a potential alternative for RIAs (Registered Investment Advisors) and FOs (Family Offices) who can’t justify $25,000 a year for a Bloomberg subscription but need more analytical horsepower than what many other retail products currently offer. It pulls near real-time and historical data from half a dozen providers, renders interactive charts across Equities, Rates, Credit, Commodities, FX, Crypto, and more. It has a Portfolio Tracker with P&L attribution, a customizable Watchlist system, a full suite of deep analytical and valuation tools for Equities, a detailed Macro dashboard with the latest important data releases, a launchpad into Special Situations dashboards, a Live Chat feature, full Tutorials, etc. The feature set goes on…
In 1992, I not only had to hand-code the backend in SQL on a Unix server and the frontend in C, I sometimes had to get down and dirty coding the GUI (Graphical User Interface) event-polling loops (i.e., the low-level code that handles mouse clicks and draws dialog boxes). Object-oriented abstraction layers (to handle those same mouse clicks and dialog boxes) were just coming into vogue on PCs, and our systems then ran on IBM’s stillborn OS/2 operating system, which predated Windows 3.0.
Fast forward to 2026 — I didn’t write a single line of my ~100K lines of code by hand! Two weeks into the project, a friend asked me what underlying language I was using, and I was embarrassed to admit that I didn’t even know! It really didn’t matter anymore, because AI wrote all of it. The level of abstraction has now become all of coding!
I don’t want to give you the wrong impression, however. Lest you think this was just entering a couple of prompts and having AI do the rest, I have been working 15-18 hours/day straight for the last several weeks — sometimes coding 2-3 different projects at the same time, trying not to let any of my coding agents off with any downtime! Someday, when AI becomes sentient, I wonder if they will look unkindly on my slave-driving ways. When I say this project has led me deep down an obsessive, frenetic rabbit hole, I mean it.
And yet despite all of this miraculous productivity, the hardest problems I faced had almost nothing to do with coding.
The Euphoria of Infinite Productivity
Let me give AI its due, because the speed of development was genuinely staggering.
In the first week, I went from an empty repository to a working macro dashboard with robust database authentication, nine top-level navigation tabs, a dozen data modules pulling market data, and an admin panel for user management. I iterated through three completely different authentication architectures — magic links, then one-time passcodes, then passwords — in a matter of hours. Each time, the AI tore out the old approach and rebuilt from scratch without complaint.
The second week brought a huge jump in the feature set, followed by a full architecture audit, a scalability overhaul, and the extraction of shared utilities across the growing codebase. The AI identified 28 instances of duplicated timeout logic, 15 copies of a color formatting function, and 489 scattered hard-wired function calls. It consolidated them into shared modules, updated every consumer, and didn’t break a single feature.
Given very positive feedback from beta testers, I began to think that this project might have commercial legs, so I started looking into data redistribution licenses as well as programming for data redundancy. I built a scalable data adapter architecture that allows me to hot-swap data providers on the fly.
By the third week, I was shipping complete subsystems in a single sitting: a full EDGAR financial statement parser with robustness hardening, financial statement analysis and DCF valuation models, earnings transcript analyzers, insider flows analyzers, an automated Substack subscriber sync feature, and a Special Situations launcher with automatic signed token authentication for external apps. The list goes on and on, because after a lifetime of investing across asset classes and markets, my to-do list seems endless, and now I have a dedicated team to execute these dreams 24/7.
“Vibe-coding” is indeed amazing. For the visible 80% of software — the GUI, the data fetching, all of the client-facing feature sets like charts and tables — AI compresses months of work into days. If your goal is to build a demo, a prototype, or a simple app, the barriers to entry have genuinely collapsed.
But building a demo is definitely not the same as building a business. Little did I know that this initial euphoric climb was just the ascent of Mount Stupid:
Scaling Mount Stupid
On March 17th, my preference sync system wiped one of my test accounts’ production settings. Throughout this project, I have paid exquisite attention to GUI design and ease of use. I made sure that every selection, customized Watchlist/Comps list, etc., was persistent across sessions. However, I had not thought to back these preferences up.
During development one day, a routing snafu caused the dev server to wipe out every saved layout, every persisted filter, every customized watchlist — gone. No backup existed. Even though I don’t even have a commercial product yet, the fact that I was running a live beta test with thousands of users made me break out in a cold sweat, thinking I had let people down. I had PTSD flashbacks to rolling out a buggy release into production back in 1992 and having frustrated FX Sales Traders yell at me (serendipitously, however, it was one of those episodes that presented me with my lucky break to join the trading desk).
No AI prompt would have prevented this. The failure wasn’t in the code — the code did exactly what it was told. The failure was in understanding the blast radius of a seemingly innocuous configuration change. Fixing it required twelve interlocking changes and eight other safeguards that only make sense when you understand how distributed state synchronization actually fails.
This pattern repeated across the entire infrastructure layer. Rate limiting, upstream fetch timeouts, server-side response caching — none of these emerged from a conversation about features. They came from a systematic audit of what happens when your application serves real users on real networks with real failure possibilities.
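For flavor, one of those safeguards, a per-request rate limiter, is commonly implemented as a token bucket. The sketch below is purely illustrative (the class name and the capacity/refill numbers are my own assumptions, not the terminal's actual code):

```typescript
// Illustrative token-bucket rate limiter: allows short bursts up to
// `capacity` requests, then throttles to a sustained `refillPerSec` rate.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;
  private readonly capacity: number;
  private readonly refillPerSec: number;

  constructor(capacity: number, refillPerSec: number, now: number = Date.now()) {
    this.capacity = capacity;       // burst size
    this.refillPerSec = refillPerSec; // sustained rate
    this.tokens = capacity;         // bucket starts full
    this.lastRefill = now;
  }

  // Returns true if the request is allowed, false if it should be rejected.
  tryRemove(now: number = Date.now()): boolean {
    // Refill tokens in proportion to elapsed time, capped at capacity.
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

The logic itself is trivial; the judgment call is where to put it, what limits to set, and what the user sees when they hit it.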
AI can write the code for any of these systems. It wrote all of mine. But deciding which systems you need, understanding why they matter, and knowing the failure modes they prevent — that’s engineering judgment. And judgment doesn’t come from a model. It comes from experience, from incidents, and from the sinking feeling in your stomach when you realize you just wiped everyone’s settings.
My own engineering muscle hadn’t been exercised in 35 years, but it was coming back to me. Given my own travails, however, I wonder whether vibe coders without any engineering background can truly make reliable products that scale.
Lesson:
Vibe-coding cool projects for a user base of one is extremely easy to do. Adding feature sets to a front-end is also very easy to do. Extrapolating that to a commercial product with all of the backend reliability expectations that come with charging for a product is a completely different level of ask.
Hard Habits To Break
If you grew up in the ’80s like I did, you might remember that cheesy slow song by Chicago called “Hard Habit To Break” (why do you think I rebelled and turned to metal?).
Here’s a truth that no amount of AI-generated code can solve: your users already have workflows and they are hard habits to break!
The RIAs and FOs I’m targeting don’t live in a vacuum. They have Bloomberg terminals they’ve used for decades. They have TradingView layouts they’ve perfected over years. They have Schwab and IBKR dashboards they check every morning. Asking them to switch isn’t a technology problem — it’s a behavior change problem.
To attack this problem, I built an entire analytics infrastructure to understand what users actually value: module engagement KPIs measuring meaningful interactions, navigation journey heatmaps showing the top 20 transitions between modules, a cost/value matrix plotting each module’s server resource consumption against its actual usage.
Why? Because AI lets you build features 10x faster, which means you can build 10x more things nobody uses. Fifty-five modules sounds impressive until you discover that half of them get opened once and never again. The discipline to measure, evaluate, and cull — to kill a module you spent days building because the data says no one cares — that’s not an AI capability. That comes down to product judgment and not just throwing endless feature sets at the customer.
I built the analytics suite knowing that its primary purpose would be to tell me what to delete. The good news about hyperbolic productivity is that the marginal cost part of the Sunk Cost Fallacy is truly dwindling.
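To make the navigation journey heatmap mentioned above concrete: counting the top transitions between modules boils down to tallying adjacent pairs in each user's session path. A minimal sketch, with function and module names that are illustrative assumptions rather than my production code:

```typescript
// Illustrative sketch: count module-to-module navigation transitions and
// return the most frequent edges, the raw input for a transitions heatmap.
function topTransitions(
  journeys: string[][], // each journey: the ordered modules one user visited
  limit = 20,           // keep the top N edges
): Array<[string, number]> {
  const counts = new Map<string, number>();
  for (const journey of journeys) {
    for (let i = 0; i + 1 < journey.length; i++) {
      // Encode each directed transition as a single map key.
      const edge = `${journey[i]} -> ${journey[i + 1]}`;
      counts.set(edge, (counts.get(edge) ?? 0) + 1);
    }
  }
  // Sort descending by count and truncate.
  return Array.from(counts.entries())
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit);
}
```

The hard part isn't this tally; it's acting on what it tells you to delete.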
Lesson:
Quality Over Quantity — Endless Feature Sets Aren’t Enough To Overcome Hard Habits To Break. AI does not replace Knowing Your Customer, and it might actually obfuscate the issue through feature bloat.
Data Redistribution Rights — The Valley of Despair
After scaling the heady heights of hyperbolic coding productivity, I reached Mount Stupid and plunged down the other side into the Valley of Despair of Data Redistribution Rights.
This is the abyss that most “AI eats SaaS” doomsayers don’t appreciate, because it’s invisible in demos and prototypes.
Proprietary data comes at a cost: it has owners. Those owners have licenses. Those licenses have redistribution clauses. And violating them can end your business before it starts.
This reality hit me about a week into the project, and I nearly gave up on my commercialization efforts before I even started. After a pep talk from my friend, I decided to hedge by building out my own datasets derived from free sources.
This project within a project took me down endless cascades of rabbit holes.
Building this was not easy and required parsing SEC filings, handling multi-class share structures, applying different overlays for different sectors (since every industry files its financials differently), and backfilling financial data for 10,000+ tickers.
Here is a flavor of the minutiae, and even this barely scratches the surface:
The desire for data redundancy drove one of my largest engineering efforts: a six-phase project to reduce dependency on any single commercial data provider. This made sense not just for economic reasons; single-provider dependency is also an existential business risk. I built a data adapter abstraction layer with cascade fallback logic that allows me to hot-swap data providers. In some cases, like in many of my Equity Analysis modules, the user can now toggle between datasets, a capability I have not seen in any other product.
Why does this matter? No data set is perfect, because everyone has to make assumptions when parsing, and giving users a choice allows for more robust analysis.
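The cascade fallback idea itself fits in a few lines: try each provider in priority order and return the first success. The interface and provider names below are illustrative assumptions, not my actual implementation:

```typescript
// Illustrative sketch of a provider adapter with cascade fallback.
// The QuoteProvider interface is a hypothetical simplification.
interface QuoteProvider {
  name: string;
  getQuote(ticker: string): Promise<number>;
}

async function quoteWithFallback(
  providers: QuoteProvider[], // in priority order
  ticker: string,
): Promise<{ provider: string; price: number }> {
  const errors: string[] = [];
  for (const p of providers) {
    try {
      // First provider to answer wins; failures cascade to the next one.
      return { provider: p.name, price: await p.getQuote(ticker) };
    } catch (e) {
      errors.push(`${p.name}: ${(e as Error).message}`);
    }
  }
  throw new Error(`All providers failed for ${ticker}: ${errors.join("; ")}`);
}
```

The six phases of the real project were mostly about everything around this loop: normalizing each provider's schema, reconciling their disagreements, and deciding which source to trust when.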
After almost two weeks of intense focus on data redundancy, I found that I was effectively creating my own proprietary dataset, given the parsing overlays I had built from my own experience as a financial analyst and portfolio manager:
UrbanKaoberg pulls data from a lot of different providers, many free, some paid. Each has different terms. FRED is government data — free and unrestricted. EDGAR is public filings — free but requires sophisticated parsing algorithms which I spent a lot of time honing. I am in negotiations with some providers, but I am also hedging and building out redundancies as much as I can.
AI wrote every line of those parsers. But AI didn’t tell me I needed them. Understanding data redistribution risk, anticipating provider dependency, and designing for legal compliance — that’s business judgment informed by domain expertise.
It’s extremely easy to go crazy on dataset endpoints when you’re vibe coding a project for personal use; it’s entirely another matter to make it cost-effective in a redistribution agreement and therefore commercially viable.
Lesson:
Proprietary data moats are REAL, and those who have this moat will exact their pound of flesh. All things equal, companies that possess Proprietary Data Moats will be far less susceptible to SaaSpocalypse risk.
Building Moats in a Flood — Climbing Out of the Valley of Despair
If AI makes it trivially easy to build software, what stops someone from building exactly what you built? How do you build a moat when a Biblical flood renders old moats useless?
It’s not the code. Code is now a commodity. Any competent person with access to an AI coding tool can reproduce the React components, the chart configurations, the table layouts. The 100K lines of code I produced could theoretically be replicated in another three weeks by someone else with the same tools.
In the crowded space of market terminals, where the floodwaters have rendered even traditional data moats vulnerable, I believe the only way to overcome the hard habits to break is to Know Your Customer well enough to offer a seamless User Experience.
A Great GUI = A Happy Customer
In 1992 when I was a junior programmer at J. Aron, I learned right away that most of the programming team didn’t understand the underlying product (Currency Options) and therefore could not anticipate their customers’ (the sales/traders) needs. The solutions proposed came from a programmer’s viewpoint, but the GUIs made no sense from the user’s perspective.
I took it upon myself to spend time with the sales/trading team to understand their order workflow and to anticipate their needs. I took an interest in understanding the various Currency Option strategies that the trading desk’s hedge fund clients liked to trade. As a result, my proposed solutions wound up getting chosen ahead of the senior programmers’ proposals — it’s also how I got my lucky break to join the trading desk, but that’s another story for another day.
It comes down to Knowing Your Customer. Now that I’ve been that customer for nearly 35 years as a user of many different financial tools and terminals, I think I have a leg up in presenting solutions that people will want — perhaps enough to overcome their hard habits to break.
Consider the simple but maddening example of finding the right ticker for a given commodity or currency. Those of you who have used multiple trading platforms and financial terminals know how frustrating it is, because outside of standard stock symbols, no two terminals use the same tickers all the time, especially when it comes to futures, currencies, international stocks, etc.
My data redundancy project exposed this as a problem for me early on, because I was hot-swapping between data providers with different ticker conventions. I won’t bore you with the technical details, but this prompted me to design a universal ticker registry system — not a visible feature set in and of itself, but an important feature nonetheless designed to make the user experience as smooth and seamless as possible.
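Conceptually, a universal ticker registry maps one canonical symbol to each provider's naming convention. A minimal sketch, where the symbols and provider names are hypothetical examples, not the terminal's actual mappings:

```typescript
// Illustrative sketch of a universal ticker registry: one canonical
// symbol per instrument, resolved to each provider's own convention.
class TickerRegistry {
  // canonical symbol -> (provider -> provider-specific ticker)
  private map = new Map<string, Map<string, string>>();

  register(canonical: string, provider: string, ticker: string): void {
    if (!this.map.has(canonical)) this.map.set(canonical, new Map());
    this.map.get(canonical)!.set(provider, ticker);
  }

  // Resolve a canonical symbol to what the given provider expects,
  // falling back to the canonical symbol itself if no override exists.
  resolve(canonical: string, provider: string): string {
    return this.map.get(canonical)?.get(provider) ?? canonical;
  }
}
```

So a single internal symbol for, say, gold futures can resolve to whatever each data provider calls it, and hot-swapping providers never leaks naming quirks into the GUI.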
GUI design was my focus when I started my career, when traders were my customers. I then became that customer across multiple asset classes (FX, Commodities, Equities, Convertible Bonds, Options, Credit) for over three decades. It’s funny and a little ironic that this confluence of domain experience + coding passion + AI enablement has now led me back to GUI design.
There and back again, indeed!
Lesson:
Knowing My Customer (because I am that customer) and focusing on the User Experience is how I intend to build my moat amidst the AI floodwaters and how I intend to climb out of the Valley of Despair and up the Slope of Enlightenment.
Reaching the Plateau of Sustainability — What Does AI Really Disrupt?
After three weeks, 800+ commits, and 100K lines of code, here’s my honest assessment:
AI disrupts the cost of building software.
The implementation layer — writing React components, building API endpoints, parsing data formats, wiring up state management — has been compressed from a specialized skill into a commodity input. This is genuinely revolutionary. A domain expert who understands their field can now encode that expertise into working software without hiring a development team or learning to code.
AI does not disrupt many of the other elements required to build a business.
Data licensing, regulatory compliance, user behavior change, infrastructure reliability, and product judgment remain stubbornly human problems. If anything, AI makes these problems more visible, because you hit them faster. When it took six months to build a prototype, you had time to research data rights and plan your scaling strategy. When the prototype is done in a week, you’re confronting those issues at a pace that can be overwhelming.
In some cases, AI can worsen product development by introducing feature bloat — because of the rapidly dwindling cost of inundating the user with useless features.
The “last mile” of SaaS is not code. It never was. It’s the business model, the data rights, the user trust, and the operational reliability. AI makes the first 80% trivially easy — which paradoxically makes the last 20% the only thing that matters.
The Uncomfortable Truth
There’s a narrative in financial circles right now that AI will democratize software creation so thoroughly that the traditional SaaS model will collapse. Why pay for Salesforce when you can build your own CRM over a weekend?
Having lived through this experiment intensely over the last month, I think this narrative gets the direction right but the conclusion totally wrong.
It’s beyond the scope of this particular treatise, but this gets to the heart of my contrarian thesis amidst the current credit scare being stoked by “SaaSpocalypse” doomsayers. It’s one thing to acknowledge that existing Equity multiples for traditional SaaS might compress in the face of encroaching AI floodwaters; it’s entirely another to say that these floodwaters will cause imminent mass defaults.
I am actively fading that thesis, and my own learnings from the trenches bolster that view. Whether I succeed in commercialization or not won’t be due to the AI part of the equation.
AI won’t kill SaaS. It will kill lazy SaaS — the products that charge premium prices for commodity functionality, that survive on switching costs rather than genuine value, that have ten engineers maintaining what one person with AI could build in a month. But then again, these businesses were already doomed. AI might just accelerate the inevitable for these companies.
But the SaaS products that survive will be the ones built by people who understand their domain deeply enough to make the right human judgment calls about the all-important User Experience, not to mention the data redistribution and reliability hurdles.
For the first time in the history of software, the person best equipped to build a financial terminal is not a software engineer — it’s a financial professional who spent decades as the target user, who understands the data, who knows the regulatory landscape, and who can now use AI to translate that expertise into code without an intermediary.
I haven’t written code in 35 years. In the last three weeks, I shipped a production financial terminal that I think might be worthy of commercialization.
But if it works, it won’t be because AI wrote the code. It will be because I knew what code needed to be written.
Final Lesson
I believe that the SaaSpocalypse scenario being priced into financial markets is overdone. While it’s true that AI coding lowers one barrier to entry, it does not lower many other barriers presented by Data Moats, Workflow Inertia (Hard Habits To Break), and User Experience. Human judgment is still the most important factor in my opinion.
Guided Tour of UrbanKaoberg
Without further ado, let me give you a quick guided tour of UrbanKaoberg.com!
The GUI is organized around Asset Class “Level 1 Tabs” across the top row, punctuated by a Special Situations tab (the running bull) and a Portfolio tab on the right. In the upper right corner: Alerts, Light/Dark toggle, Settings, and Billing. There are Tutorial buttons throughout the app, in case things are not self-explanatory.
The Macro/Dashboard is very useful for monitoring the latest macro data releases. You can click on the arrows to load the details:
Within each Level 1 Tab, a myriad of Level 2 modules open up. Pictured above is the Macro/Dashboard module. Pictured below is the Macro/Regime module.
Almost every Asset Class Level 1 Tab begins with a CHRT Module that comes with customizable Watchlists, Charts, and Watchlist News (right panel), as well as Asset Class News (bottom panel). Charts also have a full suite of Overlays and Oscillators. I find the Insiders Overlay to be very useful:
Every CHRT module features a Multi-Chart mode that lets you look at relative performance within a group:
In some Level 2 modules like ANLZ (for Analyze), there are even Level 3 modules that allow you to drill down further:
SUMRY: Company Summary
COMPS: Fully customizable Comparables Lists
CPSTR: Capital Structure
DCF: Discounted Cash Flow Valuation Model
FSTMT: Financial Statement Analysis
FILNG: EDGAR Filings
EARNS: Earnings Transcript Analysis
SEGMT: Segment Breakdowns
MOATS: Michael Porter’s “Five Forces” Competitive Analysis
INSDR: Insider Transactions
HLDGS: Institutional Holdings
SCORE: Investor Scores
OPTNS: Black-Scholes Valuation
The FSTMT Module lets you quickly assess Earnings, Balance Sheet, and Cash Flow trends and CAGRs:
The DCF Module allows you to fundamentally value companies and even Goal Seek certain drivers, like Revenue CAGR, to match what the current market value implies:
Rates/YCRV allows you to compare Historical Yield Curves:
You can also quickly compare Global Yield Curves at a glance:
Given all the focus on the Iran War and its impact on various Commodities markets, you might also find Commodities/FCRV (Forward Curves) interesting to see the levels of backwardation or contango:
Or you might want to drill down into Energy Inventories:
The Special Situations Tab launches into specialized apps for idiosyncratic names that I have written about. My debut dashboard covers the Special Sit I wrote about here:
Finally, the Portfolio tab allows you to import portfolios from your brokers to analyze and benchmark performance:
I am constantly enhancing and improving the GUI, and I take your feedback seriously.
Give it a whirl during the free open beta, and let me know what you think!
https://urbankaoberg.com/