If someone is consistently falling for phishing emails (whether real ones or tests sent by the IT department), shouldn't that person eventually be fired? Isn't that a punishment?
If there is neither a punishment nor a reward, what is the incentive to learn? Some people may not need one. Many others do.
I agree that a single failure resulting in the loss of significant income might be harsh, but there needs to be a way to convince people to take the issue seriously, and some form of punishment (e.g. eventual firing) is therefore always warranted.
You can balance out the issue by creating a reward system as well, e.g. if you report all of the test emails sent to you in a year (rather than just ignoring them), your bonus is increased by X% or something. Similarly, if you report an actual phishing email, your bonus is increased by some percentage, even if you initially fell for it. I think it is possible to foster a conscientious and honest culture with a system that includes punishments.
I think this would be too limiting for humans, and not effective for bots.
As a human, unless you know the person in real life, what's the incentive to approve them, if there's a chance you could be banned for their bad behavior?
As a bot creator, you can still achieve exponential growth: every time you create a new bot, you gain a new approver, so you go from 1 -> 2 -> 4 -> 8. Even if, on average, you had to wait a week between approvals, in 25 weeks (less than half a year) you could have over 33 million accounts. Even if you play it safe and don't generate/approve the maximum number of accounts every week, you'd still have hundreds of thousands to millions in a matter of weeks.
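A minimal sketch of that doubling model, assuming each existing account approves exactly one new account per week (the weekly approval rate is my assumption, following the numbers above):

```python
# Toy model of invite-based bot growth: every existing account
# approves one new account per week, so the total doubles weekly.
accounts = 1
for week in range(1, 26):
    accounts *= 2  # each account approves one new account
    print(f"week {week:2d}: {accounts:,} accounts")

# By week 25 this prints 33,554,432 accounts -- over 33 million,
# matching the estimate above (2**25).
```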