Technology
Which posts fit here?
Anything that is at least tangentially connected to technology, social media platforms, information technologies, and tech policy.
Rules
1. English only
The title and associated content have to be in English.
2. Use original link
The post URL should be the original link to the article (even if paywalled), with archived copies left in the body. This helps avoid duplicate posts when cross-posting.
3. Respectful communication
All communication has to be respectful of differing opinions, viewpoints, and experiences.
4. Inclusivity
Everyone is welcome here regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
5. Ad hominem attacks
Personal attacks of any kind are expressly forbidden. If you can't argue your position without attacking a person's character, you have already lost the argument.
6. Off-topic tangents
Stay on topic. Keep it relevant.
7. Instance rules may apply
If something is not covered by the community rules but is against the lemmy.zip instance rules, those rules will be enforced.
Companion communities
!globalnews@lemmy.zip
!interestingshare@lemmy.zip
And all this with a more than 25% TDP increase. That's insane.
Unfortunately, the death of Moore's law means they need to get more speed from somewhere to have new products to sell, so I expect this will continue for high-end products.
On the other hand, at least low-end products get dragged upwards too; it's amazing how much CPU you can get into 32 W.
Applications have been growing more and more thread-aware, so we're likely going to see core counts continue to increase for some time.
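As a rough illustration of what "thread-aware" means in practice (my own sketch, not something from the article, with an arbitrary workload size), here's a toy C++ program that splits a summation across however many hardware threads the CPU reports, so extra cores translate fairly directly into throughput:

```cpp
// Sketch of a thread-aware workload: split a sum across all hardware threads.
#include <cstdint>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = 1 << 24;               // hypothetical workload size
    std::vector<std::uint64_t> data(n, 1);

    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 1;                // the query may return 0

    std::vector<std::uint64_t> partial(workers, 0);
    std::vector<std::thread> pool;

    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&, w] {
            // Each worker sums its own contiguous slice of the input.
            std::size_t begin = n * w / workers;
            std::size_t end   = n * (w + 1) / workers;
            partial[w] = std::accumulate(data.begin() + begin,
                                         data.begin() + end,
                                         std::uint64_t{0});
        });
    }
    for (auto& t : pool) t.join();

    std::uint64_t total = std::accumulate(partial.begin(), partial.end(),
                                          std::uint64_t{0});
    std::cout << "threads: " << workers << ", sum: " << total << '\n';
}
```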
They might focus more on branch prediction and reducing the penalty for mispredictions, which won't be as impressive as raw clock speed or IPC but could significantly improve performance on real-world workloads. Maybe some form of deep learning or statistical analysis, or even JIT-compiling commonly called routines directly to microcode to skip instruction decoding.
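For a sense of why mispredict penalties matter, here's a small benchmark sketch (mine, with made-up sizes; exact results will vary by CPU and compiler): the same loop runs over the same values, but sorting the data first makes the branch predictable, which typically makes it noticeably faster:

```cpp
// Sketch: the cost of an unpredictable branch vs. a predictable one.
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

static std::uint64_t sum_large(const std::vector<int>& v) {
    std::uint64_t sum = 0;
    for (int x : v) {
        if (x >= 128)            // the branch the predictor has to guess
            sum += x;
    }
    return sum;
}

static double time_ms(const std::vector<int>& v) {
    auto t0 = std::chrono::steady_clock::now();
    volatile std::uint64_t sink = sum_large(v);  // keep the call from being optimized away
    (void)sink;
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> dist(0, 255);
    std::vector<int> data(1 << 22);
    for (int& x : data) x = dist(rng);

    std::cout << "unsorted: " << time_ms(data) << " ms\n";
    std::sort(data.begin(), data.end());         // same values, predictable branch
    std::cout << "sorted:   " << time_ms(data) << " ms\n";
}
```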
With enough cores, low-end products might end up seeing the iGPU ditched in favor of a return of software rendering to make more room on the die.
We'll probably see more instruction-set extensions that accelerate AI workloads or other commonly used algorithms, like those that already exist for SHA-2 and AES.
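As an illustration of what such extensions look like from software, here's a sketch using the existing AES-NI intrinsics: each `_mm_aesenc_si128` call maps to the AESENC instruction, which performs a full AES round in hardware instead of a table-driven software round. The round keys here are dummies (no key schedule), so this only demonstrates the instruction, not a usable cipher. Compile with something like `-maes` on GCC/Clang.

```cpp
// Sketch of hardware crypto acceleration via AES-NI intrinsics (dummy keys).
#include <cstdint>
#include <cstdio>
#include <wmmintrin.h>   // AES-NI intrinsics

int main() {
    alignas(16) std::uint8_t block[16] = {0};          // 128-bit plaintext block
    alignas(16) std::uint8_t key[16]   = {0x2b, 0x7e}; // dummy key material

    __m128i state = _mm_load_si128(reinterpret_cast<const __m128i*>(block));
    __m128i rk    = _mm_load_si128(reinterpret_cast<const __m128i*>(key));

    state = _mm_xor_si128(state, rk);        // initial AddRoundKey
    state = _mm_aesenc_si128(state, rk);     // one full AES round in one instruction
    state = _mm_aesenclast_si128(state, rk); // final round (no MixColumns)

    _mm_store_si128(reinterpret_cast<__m128i*>(block), state);
    for (std::uint8_t b : block) std::printf("%02x", b);
    std::printf("\n");
}
```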