I can already imagine the lesswronger response: Something something bad comparison between neural nets and biological neurons, something something bad comparison with how the brain processes pain that fails at neuroscience, something something more rhetorical patter, in conclusion: but achkshually what if the neural network does feel pain.
They know just enough neuroscience to use it for bad comparisons and for hyping up their ML approaches, but not enough to actually draw any legitimate conclusions.
Is this the corresponding lesswrong post: https://www.lesswrong.com/posts/TpSFoqoG2M5MAAesg/ai-2027-what-superintelligence-looks-like-1?
Committing to a hard timeline at least means that, in two years, making fun of them and explaining to laymen how stupid they are will be a lot easier. I doubt the complete failure of this timeline will actually shake the true believers, though. And the more experienced ~~grifters~~ forecasters know to keep things vaguer, so they will be able to retroactively reinterpret their predictions as correct.