hardware26

joined 1 year ago
[–] hardware26@discuss.tchncs.de 6 points 10 months ago* (last edited 10 months ago) (4 children)

I used the Atmel 8051 in college. It fits nicely on a breadboard and teaches you how to use assembly and work wonders with 512 bytes (yes, bytes) of RAM, if I remember the number correctly. I think half of that RAM was even reserved.

[–] hardware26@discuss.tchncs.de 5 points 10 months ago

To be fair, 10^(0.000000000000000000001x) is also exponential growth. And if the status quo is x=0 and removing management entirely means x=10, then even the maximum we can get is a very small improvement. It can be "exponential" and still not amount to much.
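
To make this concrete, here is a minimal Python sketch of the same point (the constant and the x range are the ones from the comment; everything else is just illustration):

    # A function can be exactly exponential in x and still change by a
    # negligible amount over the whole range of interest.
    import math

    k = 1e-21  # the tiny rate constant from 10^(k*x)

    def relative_gain(x):
        # 10^(k*x) - 1, computed with expm1 so the tiny difference
        # is not lost to floating-point rounding
        return math.expm1(math.log(10) * k * x)

    print(relative_gain(0))   # 0.0: status quo
    print(relative_gain(10))  # ~2.3e-20: "all management removed"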

[–] hardware26@discuss.tchncs.de 21 points 10 months ago (2 children)

"Exponentially" is not synonymous to "a lot". Exponent is a mathematical term and exponential growth requires at least two variables exponentially related to each other. For this to be possibly exponential growth a) progress should be quantifiable (removing management and treating workers well should be quantized somehow) b) performance should be quantifiable and measured at a bunch of progress points (if you have only two measurements it can as well be linear) c) performance should be or can be modeled as a an exponential function of progress in removing management and treating workers well.

[–] hardware26@discuss.tchncs.de 5 points 10 months ago (1 children)

I wish we had an active aoe2 community.

[–] hardware26@discuss.tchncs.de 26 points 10 months ago

I don't think this will work well and others already explained why, but thanks for using this community to pitch your idea. We should have more of these discussions here rather than CEO news and tech gossip.

[–] hardware26@discuss.tchncs.de 1 points 10 months ago

I guess it can happen if you start moving the bottle forward after you start pouring the water.

[–] hardware26@discuss.tchncs.de 4 points 10 months ago

We should stop calling these titles confusing and call them what they are: plain wrong. This is the title of the original article. People who cannot write grammatically correct titles are writing entire articles.

[–] hardware26@discuss.tchncs.de 3 points 11 months ago

Depending on the power consumption, you may consider not using thermal relief when connecting the thermal vias for the chip (component 57) to the ground layers. But this can make soldering harder, so do it only if needed. The thermal vias are so close together that they form three long dents in the 3V3 plane. It is good practice to space vias a little farther apart so that planes can pass between them; this can matter because sometimes the lowest-impedance path has current flowing between those vias. If you don't need to fit 15 vias there, consider reducing the number and spreading them out a bit, as in the rough estimate below. Also check your manufacturer's design rules for minimum copper width and minimum via clearance, and enter them in your CAD tool.
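
For context, a rough back-of-envelope estimate (my numbers, not the actual board: typical 0.3 mm drill, 25 µm plating, 1.6 mm board) of why going from 15 vias down to 10 costs little thermally:

    import math

    k_cu  = 390      # W/(m*K), copper
    board = 1.6e-3   # m, via barrel length (board thickness), assumed
    drill = 0.3e-3   # m, finished hole diameter, assumed
    plate = 25e-6    # m, barrel plating thickness, assumed

    r_o = drill / 2
    r_i = r_o - plate
    area = math.pi * (r_o**2 - r_i**2)  # copper cross-section of one barrel
    r_single = board / (k_cu * area)    # ~190 K/W for one via

    for n in (1, 5, 10, 15):
        # n vias in parallel share the heat flow
        print(n, "vias:", round(r_single / n, 1), "K/W")

Going from 10 to 15 vias only shaves off a few K/W, so trading a couple of vias for better plane continuity is usually a reasonable deal.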

[–] hardware26@discuss.tchncs.de 13 points 11 months ago (1 children)

I don't realistically expect such a ban to happen. I started blocking everyone who posts about Musk instead, and my feed got a lot cleaner.

[–] hardware26@discuss.tchncs.de 6 points 11 months ago

Third siblings born right after cameras became affordable have the most pictures.

[–] hardware26@discuss.tchncs.de 6 points 11 months ago

Not every programmer has a GitHub account.

[–] hardware26@discuss.tchncs.de 3 points 11 months ago

"City-size" at least they didn't measure in football fields.

 

cross-posted from: https://discuss.tchncs.de/post/3011500

Many volume applications use FPGAs because they need in-field reconfigurability (changing standards, changing algorithms, etc.), but they want to improve their system's competitiveness (power, size, cost). FPGAs are bulky, expensive, and power-hungry. Integrating eFPGA can greatly improve the economics while maintaining full reconfigurability and performance.

We've found with customers that a significant portion of the LUTs in their designs don't change between reconfigurations: they are fixed buses that bring data to and from the reconfigurable core. These can be hardwired, so the number of LUTs needed in the SoC is typically half of what's in the FPGA. There is also a lot of cost in the voltage regulators for an FPGA that disappears with integration.

Typically, the cost of eFPGA is 1/10th that of the FPGA it replaces, but with the same speed and programmability. Power can also be cut to 1/10th, because most of the power in an FPGA goes to the power-hungry PHYs, which are mostly not needed when using eFPGA in an SoC.
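
A trivial back-of-envelope calculator for these claims (the 0.5 LUT ratio and the 1/10 cost and power factors are the figures stated above; the baseline FPGA numbers are placeholders to replace with your own):

    fpga_luts  = 100_000  # hypothetical standalone FPGA
    fpga_cost  = 50.0     # $, hypothetical
    fpga_power = 10.0     # W, hypothetical

    efpga_luts  = fpga_luts * 0.5    # fixed data buses hardwired in the SoC
    efpga_cost  = fpga_cost * 0.1    # "1/10th the cost of the FPGA it replaces"
    efpga_power = fpga_power * 0.1   # PHY power mostly eliminated

    print(f"LUTs:  {fpga_luts} -> {efpga_luts:.0f}")
    print(f"Cost:  ${fpga_cost} -> ${efpga_cost}")
    print(f"Power: {fpga_power} W -> {efpga_power} W")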

 

In a study recently published in the journal Patterns, researchers demonstrate that computer algorithms often used to identify AI-generated text frequently falsely label articles written by non-native language speakers as being created by artificial intelligence. The researchers warn that the unreliable performance of these AI text-detection programs could adversely affect many individuals, including students and job applicants.

 

cross-posted from: https://discuss.tchncs.de/post/2739005

https://semiengineering.com/challenges-in-ramping-new-manufacturing-processes/

Despite a slowdown for Moore's Law, more new manufacturing processes are rolling out faster than ever before. The challenge now is to decrease time to yield, which involves everything from TCAD and design technology co-optimization to refinement of power, performance, and area/cost, as well as process control and analytics. Srinivas Raghvendra, vice president of engineering at Synopsys, talks about the various steps involved in determining what can be printed on a wafer, how to reduce defect density, and what other concerns need to be addressed to ramp a new process.

 

The genesis of this upheaval is inextricably tied to the smart phone revolution. “It was when people realized what the phone could do for their life,” Curran said. “That led people to ask why their car was not able to know them and understand what they want. ‘Why do I have all these buttons? Why isn’t it upgradable like the phone is upgradable?’ Then, when Tesla came out and started the whole vehicle based on the software, people realized this is the way of the future.

...

For automotive OEMs to adopt new architectures requires a fundamental shift in how they approach their supply chain. Modules cannot be developed individually by multiple Tier 1 and Tier 2 suppliers. Instead, they need to be developed in sync, with an understanding of how each is characterized and how they can be fully integrated. “You can’t have Bosch do one module, Continental do another module, Aptiv do a separate module, then plug them in on the assembly line and think the experience is going to be great,” she said.

...

“The OEMs are saying, ‘If I’m going to go to 3nm, which is $75 million to $100 million for a mask set, plus a huge development team, where am I going to get those people? That’s not the biggest pool of talent in the world,” said Fritz. “‘How do I do that?’ Chiplets. So now they’re saying, ‘I can have these companies, maybe even startups, developing a chiplet.’ It’s more cost-effective for them, because those chiplets can be sold to many customers and across multiple market segments and get the volume up.

...

“An application like Apple CarPlay is different from other components in a vehicle, where others are trying to collaborate as OEMs pull it together,” said Simon Rance, director, product management, data & IP management at Keysight EDA. “The user experience plays a big role in the outcome of that design and application. That’s where there needs to be tighter collaboration between those OEMs that are involved in that system, not just Apple with the CarPlay app and its capabilities and functions. How does it interface with Bluetooth? How does it interface with sensors and sensor data, for example? These are where vendors are looking to take these capabilities, or solutions like CarPlay, to the next level.

...

"We’ve had these traditional collaborations in automotive where we get OEM cross-synchronization,” Lapides said. “Traditionally that’s been around AUTOSAR, maybe around embedded Linux, and certainly around overall SoC design. But now we’re seeing much more collaboration in the software area, outside of AUTOSAR, outside of the OS. We’re seeing more collaboration getting down to the processor side and what the processor can do. Those things are really interesting — especially in automotive, where AI is going out to the edge with sensors.

 

cross-posted from: https://discuss.tchncs.de/post/2554454

The digital RAKs provide Arm Neoverse V2 designers with several key benefits. For example, the Cadence Cerebrus AI capabilities automate and scale digital chip design, delivering better PPA and improving designer productivity. Cadence iSpatial technology provides an integrated and predictable implementation flow for faster design closure. The RAKs also include a smart hierarchy flow that delivers optimal turnaround times on large, high-performance CPUs. The Tempus ECO technology offers signoff-accurate final design closure based on path-based analysis. Finally, the RAKs incorporate the GigaOpt activity-aware power optimization engine to significantly reduce dynamic power consumption.

 

cross-posted from: https://discuss.tchncs.de/post/2444019

I have an electronics and digital design/verification background (MSc and some industry experience). As the title says, I am interested in learning, and lately I have gotten particularly interested in formal verification: I started reading books and watching tutorials, on top of applying it at work. I would really like to learn more, participate in the field's advancement, and contribute even in the slightest way. I also enjoy the academic environment. This is why I am considering a PhD.

However, leaving my job for a full-time PhD means a significant pay cut even if I get into a funded program. I am also here on a visa, and many programs require you to pay the difference between foreign-student and domestic-student tuition out of your own pocket, even after receiving the funding. So leaving my job is likely not an option. I have thought about doing a PhD part-time on top of my job. It would be very time- and energy-consuming, but I think I can take that. My bigger concern is that a part-time PhD takes a long time (6-8 years) and the field is ever-changing; I am afraid my thesis may become irrelevant by the time I finish it. I also hear that if you do it part-time, you will not get the best subjects, since professors prefer to provide better supervision to, and get quicker returns from, full-time students. So I am hesitant about a PhD, even though it is something I have been thinking about since a very young age.

What do you think about a PhD? Do you have any advice, or an opportunity or downside I have not considered? And if not with a PhD, how do I learn and research more? Reading and taking online courses are always options, but without supervision, a clear goal, and guidance, I am sure I will get sidetracked and it may not be very fruitful.

 

!phd@lemm.ee

 

I see news from April saying it was decided not to have fireworks at the end of the festival this year. Is there any update on that? Are we really not going to have them?

 

cross-posted from: https://discuss.tchncs.de/post/2357238

Are you an engineer working on designing complex modern chips or System On Chips (SOCs) at the Register Transfer Level (RTL)? Have you ever been in one of the following frustrating situations?

• Your RTL designs suffered a major (and expensive) bug escape due to insufficient coverage of corner cases during simulation testing.

• You created a new RTL module and want to see its real flows in simulation, but realize this will take another few weeks of testbench development work.

• You tweaked a piece of RTL to aid synthesis or timing and need to spend weeks simulating to make sure you did not actually change its functionality.

• You are in the late stages of validating a design, and the continuing stream of new bugs makes it clear that your randomized simulations are just not providing proper coverage.

• You modified the control register specification for your design and need to spend lots of time simulating to make sure your changes to the RTL correctly implement these registers.

If so, congratulations: you have picked up the right book! Each of these situations can be addressed using formal verification (FV) to significantly increase both your overall productivity and your confidence in your results. You will achieve this by using formal mathematical tools to create orders-of-magnitude increases in efficiency and productivity, as well as introducing mathematical near-certainty into areas previously dependent on informal testing.

Design verification has always been essential to chip design. However, as chip complexity has increased over the years, the state space and the required verification effort have exploded exponentially. With powerful and commercially accessible tools emerging, formal verification has become more viable, and even unavoidable, for reliable sign-off and for catching bugs early in the process. I found this book a very helpful introduction to formal verification. It explains how formal can be utilized; different methods such as formal property verification (FPV) and sequential equivalence checking (SEC) and where they are useful; limitations; complexity problems; and how to mitigate the issues that come with formal. It explains how formal and functional verification can complement each other for combined sign-off. It explains theoretical concepts with clear examples and diagrams. It also explains formal algorithms for anyone interested, but the focus is more on how to utilize formal in your projects. And if you are a total beginner, do not worry: there is a section that explains the essentials of SystemVerilog Assertions (SVA), which you can skip entirely if you already know them.
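
To give a flavor of what FPV buys you, here is a toy sketch in plain Python (not the book's SVA flow): explicit-state reachability that checks a property in every reachable state of a small design, which is exactly the exhaustiveness that randomized simulation cannot provide. The design and property are invented for illustration.

    from collections import deque

    def next_states(state):
        # Toy "RTL": a 3-bit counter that is designed to skip the value 5
        nxt = (state + 1) % 8
        return [6 if nxt == 5 else nxt]

    def property_holds(state):
        return state != 5  # assert: the counter never equals 5

    def check(initial):
        seen, queue = {initial}, deque([initial])
        while queue:  # breadth-first exploration of all reachable states
            s = queue.popleft()
            if not property_holds(s):
                return f"counterexample: state {s}"
            for n in next_states(s):
                if n not in seen:
                    seen.add(n)
                    queue.append(n)
        return "property proven over all reachable states"

    print(check(0))

Commercial formal tools do this symbolically (BDDs, SAT) rather than by enumerating states, which is how they cope with state spaces far too large to list.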
