3 posts tagged with "Methodology"

Evaluation methods, scoring rubrics, and research process

HUD FMR and Income Limits with AI: Housing Research Needs Source Data

· 6 min read
MCPBundles

Housing research questions are easy to ask and easy to answer badly.

"Is this county affordable?" "What does HUD say about rent here?" "Which income limit should I use?" "How much cost burden shows up in CHAS?"

An agent can only answer those questions well if it works from source data: HUD Fair Market Rents, standard income limits, MTSP income limits, geography identifiers, and CHAS affordability tables. A language model alone will blur those concepts together.

We rebuilt the HUD Housing Data MCP server around that evidence path, so agents can retrieve the HUD rows first and then explain what they mean.

AI housing research dashboard showing HUD Fair Market Rent, income limits, and CHAS affordability cards
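The "source data first" path can be sketched in a few lines. This is a minimal illustration, not the HUD Housing Data MCP server's actual API: `affordability_answer`, `fetch_fmr`, and `fetch_income_limits` are hypothetical names standing in for whatever tools the agent calls.

```python
# Illustrative sketch of the evidence path: retrieve the HUD rows first,
# and only summarize when the rows actually came back.
# fetch_fmr / fetch_income_limits are hypothetical stand-ins for MCP tools.

def affordability_answer(county_fips: str, fetch_fmr, fetch_income_limits):
    """Retrieve HUD source rows first; refuse to answer without them."""
    evidence = {
        "fmr": fetch_fmr(county_fips),                    # Fair Market Rents
        "income_limits": fetch_income_limits(county_fips) # standard income limits
    }
    if not all(evidence.values()):
        # No source data means no claim -- return the gaps, not a guess.
        return {"answer": None, "evidence": evidence}
    return {"answer": "summary grounded in the rows below", "evidence": evidence}
```

The point of the shape: the `evidence` dict travels with the answer, so a reader can always inspect which HUD rows the explanation rests on.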

UK Property Data with AI: Valuations Need Evidence, Not Guesswork

· 6 min read
MCPBundles

Most AI valuation demos make the same mistake. A user types an address, the model returns a number, and everyone pretends the answer came from evidence.

That is backwards. UK property questions are only useful when the agent can show its working: nearby sold prices, EPC floor area, property type, transfer dates, postcode geography, match confidence, and the gaps where public data is thin.

We built the UK Property Intelligence MCP server around that evidence loop. The goal is not to manufacture certainty. The goal is to turn public house price data, EPC records, and postcode context into a bounded report an analyst can inspect.

UK property valuation evidence dashboard with sold-price cards, EPC rating tiles, postcode map, and an AI agent confidence indicator
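The "bounded report" idea can be sketched as a data shape. This is an assumption-laden illustration, not the UK Property Intelligence server's real schema: the field names and the `bounded_report` helper are invented for this example.

```python
# Illustrative sketch: assemble an inspectable evidence report and record
# the gaps where public data is thin, instead of manufacturing certainty.

def bounded_report(sold_prices, epc_record, postcode):
    """Collect sold-price comparables and EPC context; flag missing evidence."""
    report = {
        "postcode": postcode,
        "comparables": sold_prices,  # nearby sold prices with transfer dates
        "floor_area_m2": (epc_record or {}).get("floor_area_m2"),
        "gaps": [],
    }
    if not sold_prices:
        report["gaps"].append("no nearby sold prices")
    if epc_record is None:
        report["gaps"].append("no EPC record")
    return report
```

An analyst reading the report sees the comparables and the gaps side by side, which is the whole "show its working" loop in miniature.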

How We Score MCP Server Security: 18 Rules, Two Published Taxonomies, Zero Invented Checks

· 8 min read
MCPBundles

You paste an MCP server URL into a security analyzer. It spits out a number. You ask the obvious question: what does that number actually mean?

Most MCP scanners can't answer it. They run a pile of regexes and a pile of LLM prompts and produce a verdict. If you push on the verdict, you find ad-hoc heuristics with no published source — and worse, you find marketing claims about "AI-powered security analysis" that nobody can audit.

We built MCPBundles' analyzer the other way around. Every rule cites a published taxonomy entry. If we can't cite an entry, the rule doesn't ship. The catalog is small, deliberate, and live: www.mcpbundles.com/learn/mcp-security.

This post is the "show your work" version of that page.
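The citation policy can be sketched as a tiny gate over the rule catalog. Everything here is illustrative: the `Rule` class, the field names, and the placeholder taxonomy names stand in for the real catalog, and the two actual published taxonomies are not named in this teaser.

```python
# Illustrative sketch of "if we can't cite an entry, the rule doesn't ship".
# PUBLISHED_TAXONOMIES uses placeholder names, not the two real sources.
from dataclasses import dataclass

PUBLISHED_TAXONOMIES = {"taxonomy-a", "taxonomy-b"}

@dataclass(frozen=True)
class Rule:
    rule_id: str
    taxonomy: str  # which published taxonomy this rule cites
    entry_id: str  # the specific entry within that taxonomy

def shippable(rules):
    """Keep only rules that cite a recognized taxonomy entry."""
    return [r for r in rules if r.taxonomy in PUBLISHED_TAXONOMIES and r.entry_id]
```

The gate is the audit trail: any rule in the shipped catalog carries the taxonomy entry a reviewer can look up.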