I Used Claude Code to Build a Credit Scoring Model (Here's What I Learned)

If you work in credit risk or ML, you know the feeling: painfully slow iteration cycles. Want to test one new idea? Prepare the data, write the code, train the model, evaluate, repeat. A single experiment can eat hours or even days. I got tired of it. So I tried something different.

I wanted to see if AI coding tools could actually speed up the ML workflow - not just write boilerplate, but help me iterate faster on real credit scoring models. So I fired up Claude Code and paired it with AutoGluon (Amazon’s AutoML library) to see what would happen.

The task: build a credit scoring model from a dataset, compare different algorithms, get clean metrics. The kind of thing I’d normally spend half a day on.

Honestly? It was faster than I expected. Way faster. Claude Code handled the tedious parts - data preprocessing, setting up the AutoGluon training pipeline, generating evaluation metrics, creating comparison charts. The stuff that usually eats up time not because it’s hard, but because it’s repetitive.
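To give a sense of what that pipeline looks like, here's a minimal sketch of the AutoGluon setup. The file name and the `default_flag` label column are placeholders, not the actual dataset I worked with.

```python
# Minimal AutoGluon training sketch (file and column names are placeholders).
from autogluon.tabular import TabularDataset, TabularPredictor

# Load the prepared training data; assumes a binary "default_flag" target column.
train_data = TabularDataset("train.csv")

# Train a suite of models (GBMs, random forests, neural nets, ...) ranked by AUC.
predictor = TabularPredictor(
    label="default_flag",
    problem_type="binary",
    eval_metric="roc_auc",
).fit(
    train_data,
    presets="medium_quality",  # trade training time for accuracy as needed
    time_limit=600,            # cap the run at ten minutes
)
```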

Within maybe an hour, I had multiple models trained, AUC scores compared, and a decent baseline ready. That same workflow would’ve taken me most of a day doing it manually.
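The comparison itself is basically one call. Continuing from the predictor above, and assuming a held-out `test.csv` with the same columns:

```python
# Compare the trained models on held-out data (path is a placeholder).
from autogluon.tabular import TabularDataset

test_data = TabularDataset("test.csv")

# Leaderboard ranks every trained model by validation and test AUC.
lb = predictor.leaderboard(test_data)
print(lb[["model", "score_test", "score_val"]])

# Overall metrics for the best model / weighted ensemble.
print(predictor.evaluate(test_data))
```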

And this is important: the AI didn’t make me smarter about credit risk. It just made me faster. I still had to know which features actually matter for creditworthiness. I still had to think about whether the model would be fair to borrowers with thin credit files. I still had to interpret the results and decide if they made business sense.

Claude Code can write the code, but it doesn't know the business context. It doesn't know when a feature that looks predictive is actually a proxy for something you shouldn't discriminate on.

That’s the part that still needs a human. And honestly, that’s the hard part anyway.

Why I Think This Matters

In credit risk, faster iteration isn’t just about efficiency - it’s about being able to test more ideas. Can this alternative data source help us score borrowers who don’t have traditional credit history? Let’s find out in an hour instead of a week.
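Concretely, that kind of question becomes a quick ablation. A sketch, where the alternative-data columns like `telco_payment_history` are made up to illustrate the idea:

```python
# Sketch of an ablation: does adding alternative-data features move AUC?
# All file and column names here are hypothetical.
from autogluon.tabular import TabularDataset, TabularPredictor

train_data = TabularDataset("train_with_alt_data.csv")
test_data = TabularDataset("test_with_alt_data.csv")

alt_features = ["telco_payment_history", "utility_on_time_ratio"]
baseline_cols = [c for c in train_data.columns if c not in alt_features]

# Baseline: traditional bureau-style features only.
baseline = TabularPredictor(label="default_flag", eval_metric="roc_auc").fit(
    train_data[baseline_cols], time_limit=600
)

# Candidate: same pipeline plus the alternative-data columns.
with_alt = TabularPredictor(label="default_flag", eval_metric="roc_auc").fit(
    train_data, time_limit=600
)

print("baseline AUC:     ", baseline.evaluate(test_data[baseline_cols])["roc_auc"])
print("with alt data AUC:", with_alt.evaluate(test_data)["roc_auc"])
```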

For fintech companies trying to serve underbanked populations, that speed matters. The faster you can experiment, the faster you can find models that actually work for people who’ve been ignored by traditional banks.

My Honest Take

AI coding assistants are useful. They’re not magic. They accelerate the parts of ML work that were already straightforward (but time-consuming), and they don’t touch the parts that were actually hard.

If you know what you’re doing, tools like Claude Code can make you significantly more productive. If you don’t know what you’re doing, you’ll just produce bad models faster.

So yes, worth exploring. But don't expect it to replace the thinking part. That's still on you.

