This is the fundamental FUD aspect against the AI Act. A startup that has no impact on people’s lives cannot be a high-risk application by definition. I’ve yet to see a realistic example of a harmless startup that would fall under the act; if you have one, it would change my view on the matter.
Comments
We’re still at zero use cases, even hypothetical ones, though.
Do you have a line? How many people can a startup destroy? 1? 10? 1000? 6000000?