If an AI product touches health, legal, workplace, or other sensitive areas of users' lives, the risk is usually not the model alone. It is the surrounding product surface: logging, retention, prompts, exports, fallback states, and claims the team cannot actually defend.
This is a trust review for AI products that handle sensitive user data and need clearer boundaries, narrower claims, and inspectable proof before launch or sales exposure.
That is enough for a first pass. I will reply with an assessment of fit, the smallest useful starting point, and whether this looks like a teardown, a full review, a fix sprint, or not a fit.