hardware26

joined 1 year ago
[–] [email protected] 3 points 1 month ago

DC motors have high inductance, meaning the current through them resists change. When you turn off a pair of NMOS transistors, the current will likely keep flowing through the other pair, from source to drain. Depending on the spec of your NMOS, you may consider adding diodes in parallel with them to carry this current. Obviously these diodes should be reverse biased during normal operation.
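For intuition, this comes straight from the inductor relation (a standard formula, nothing specific to your motor or bridge):

```latex
v_L = L \frac{di}{dt}
% forcing the motor current to stop instantly would require an unbounded di/dt,
% i.e. a huge voltage spike, so the current keeps flowing through the body diodes
% (or external freewheeling diodes) of the other pair
```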

[–] [email protected] 0 points 2 months ago

Not answering the question, but if you studied in English outside of the UK, that may be enough. You do, however, need to certify that you studied in English through https://www.ecctis.com/visasandnationality . That is how I did it, and I think it is easier and can be quicker.

[–] [email protected] 8 points 7 months ago

In my first ever programming class, the textbook used Allman style, probably because it makes it easy for a beginner to match braces. It is a lot less common in industry, to my knowledge.

[–] [email protected] 1 points 9 months ago

As you said, before power-on the capacitor is discharged. Right after power-on it is still discharged, so the voltage across it is zero and the reset pin sits at Vcc. Over time the capacitor charges, the voltage across it increases, and the reset voltage gets closer and closer to ground until it reaches ground. But it is important to consider what happens at power-down too. At power-down the capacitor is charged. If the power source becomes high impedance at power-down, the reset pin will probably drift down to zero eventually, though how long that takes depends on what the source actually does. If instead the rail is pulled to zero at power-down, the reset pin will see minus Vcc and slowly rise back to 0. If the reset pin is sensitive, it may be a good idea to protect it with a diode.
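A rough sketch of the reset-node voltage, assuming the topology described above (capacitor from Vcc to the reset pin, resistor from the reset pin to ground) and an ideal step on the supply:

```latex
% after power-on (capacitor initially discharged):
V_{\mathrm{reset}}(t) = V_{cc}\, e^{-t/RC}
% after power-down with the rail pulled to 0 V (capacitor initially charged to Vcc):
V_{\mathrm{reset}}(t) = -V_{cc}\, e^{-t/RC}
```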

[–] [email protected] 0 points 11 months ago (1 children)

But have you tried restarting your computer and reinstalling all drivers?

[–] [email protected] 1 points 11 months ago

In this article RTL refers to register transfer level. It is a way of describing hardware at a very low level: it uses registers for memory (which usually translate to flip-flops when/if synthesized), wires, and basic arithmetic and logic operations, though the terminology changes slightly depending on which RTL language is used. It can be used to design a CPU or any ASIC (application-specific integrated circuit). The statements may resemble processor instructions, but the end result is fundamentally different. You run a set of instructions on a processor, whereas what RTL describes is usually synthesized and becomes the hardware itself that performs the operations (e.g. the arithmetic logic unit in the CPU).
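As a made-up illustration (not from the article), a few lines of RTL describing an 8-bit accumulator; a synthesis tool would turn the register into flip-flops and the addition into an adder circuit:

```systemverilog
// Minimal RTL sketch: an 8-bit accumulator (names made up for the example).
// "acc" is a register (synthesizes to flip-flops); the "+" becomes an adder.
module accumulator (
    input  logic       clk,
    input  logic       rst_n,
    input  logic [7:0] din,
    output logic [7:0] acc
);
    always_ff @(posedge clk or negedge rst_n) begin
        if (!rst_n)
            acc <= '0;          // asynchronous reset of the register
        else
            acc <= acc + din;   // register transfer: new value captured each clock
    end
endmodule
```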

 

cross-posted from: https://discuss.tchncs.de/post/3979328

Engineers at Princeton extended AutoSVA into a GPT4-based flow that generates SVA (SystemVerilog Assertions) from buggy RTL and a functionality description. SVA is widely used to verify digital designs for ASICs and FPGAs. AutoSVA2, which extends the open-source AutoSVA, improves the flow for generating SVA from an English description. GPT4 was prompted over multiple iterations to craft rules that make it produce SVA with correct syntax, something it fails to do on its own. The authors argue that GPT's "creativity" allows it to write correct assertions even from buggy RTL. They later used the tool to write RTL from scratch as well: RTL written by GPT was tested against the SVA generated by this tool, SVA corrected by an engineer was fed back to the LLM, and within a few iterations it produced a functionally correct FIFO queue.

Abstract—Formal property verification (FPV) has existed for decades and has been shown to be effective at finding intricate RTL bugs. However, formal properties, such as those written as SystemVerilog Assertions (SVA), are time-consuming and error-prone to write, even for experienced users. Prior work has attempted to lighten this burden by raising the abstraction level so that SVA is generated from high-level specifications. However, this does not eliminate the manual effort of reasoning and writing about the detailed hardware behavior. Motivated by the increased need for FPV in the era of heterogeneous hardware and the advances in large language models (LLMs), we set out to explore whether LLMs can capture RTL behavior and generate correct SVA properties. First, we design an FPV-based evaluation framework that measures the correctness and completeness of SVA. Then, we evaluate GPT4 iteratively to craft the set of syntax and semantic rules needed to prompt it toward creating better SVA. We extend the open-source AutoSVA framework by integrating our improved GPT4-based flow to generate safety properties, in addition to facilitating their existing flow for liveness properties. Lastly, our use cases evaluate (1) the FPV coverage of GPT4-generated SVA on complex open-source RTL and (2) using generated SVA to prompt GPT4 to create RTL from scratch. Through these experiments, we find that GPT4 can generate correct SVA even for flawed RTL—without mirroring design errors. Particularly, it generated SVA that exposed a bug in the RISC-V CVA6 core that eluded the prior work’s evaluation.
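For readers who have not written SVA before, here is a small hand-written illustration of the kind of safety properties such a flow targets (signal names are invented for this sketch, not taken from the paper):

```systemverilog
// Hand-written SVA sketch for a FIFO-like interface (hypothetical signal names).
module fifo_props (
    input logic clk, rst_n,
    input logic push, pop, full, empty
);
    // Safety: never accept a push while full, never pop while empty.
    assert property (@(posedge clk) disable iff (!rst_n) !(push && full));
    assert property (@(posedge clk) disable iff (!rst_n) !(pop && empty));

    // Sanity: a FIFO cannot be full and empty at the same time.
    assert property (@(posedge clk) disable iff (!rst_n) !(full && empty));
endmodule
```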

 

As solder bump pitches shrink, several issues arise. Reduced bump height and surface area for bonding make it increasingly difficult to establish reliable electrical connections, necessitating precise manufacturing processes to avoid errors. Critical co-planarity and surface roughness become paramount, as even minor irregularities can compromise successful bonding.

To overcome these issues, Cu-Cu hybrid bonding technology steps in as a game-changer. This innovative technique involves embedding metal contacts between dielectric materials and using heat treatment for solid-state diffusion of copper atoms, thereby eliminating the bridging problem associated with soldering.

The advantages of hybrid bonding over flip-chip soldering are clear. Firstly, it enables ultra-fine pitch and small contact sizes, facilitating high I/O counts. This is critical in modern semiconductor packaging, where devices require a growing number of connections to meet performance demands. Secondly, unlike flip-chip soldering, which often relies on underfill materials, Cu-Cu hybrid bonding eliminates the need for underfill, reducing parasitic capacitance, resistance and inductance, as well as thermal resistance. Lastly, Cu-Cu hybrid bonding nearly eliminates the 10 to 30 micron thickness of the solder balls used in flip-chip technology; the resulting thinner connections open up new possibilities for more compact and efficient semiconductor packages.

 

Although you are probably not aware of them, dozens of electronic control units (ECUs) — printed circuit boards (PCBs) in metal or plastic housings — exist in your car to control and monitor the operation and safety of your vehicle’s many control systems. These units must work for the lifetime of your car, during which time they are subjected to many heating and cooling cycles. The most obvious cycle occurs when you start your car after it has cooled at night. It heats up as the car runs and then cools again when you shut it off. That’s one “ambient” temperature cycle.

Additional so called “active” thermal cycles can occur locally within specific electronic components on the PCB. For instance, a MOSFET transistor draws a lot of current and heats up the PCB near its location, causing additional thermal cycling. These complex temperature distributions can cause local thermomechanical strain because differences in temperature across the PCB result in differential expansion of the board. Because the board is constrained by its housing, this can lead to bending of the board, putting additional strain on the solder joints that connect the components to the board.
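The first-order driver of that strain is the textbook mismatch in thermal expansion (a generic approximation, not a formula from the article):

```latex
\varepsilon_{\mathrm{th}} \approx \Delta\alpha \cdot \Delta T
% \Delta\alpha: difference in coefficients of thermal expansion (component vs. solder vs. board)
% \Delta T:     temperature swing of the ambient or active cycle
```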

The widely used power-law-based approach, which simulates only a few cycles and extrapolates the solder joints' lifetime, has many shortcomings: it provides no absolute lifetime prediction and does not capture damage-driven load relocation or its nonlinear evolution. Youssef Maniar and Marta Kuczynska, engineers at Robert Bosch GmbH in Germany, have developed an accurate nonlinear damage model able to predict the absolute lifetime of solder connections. The problem they faced is that absolute lifetime prediction involves simulating every cycle imposed on the components, so the computational effort is extensive. Then, about two years ago, they read an academic paper that described a way to "jump" over some cycles to accelerate the simulation.

The mathematics behind jumping over a large number of simulated thermomechanical cycles, dramatically accelerating simulation time without sacrificing accuracy, is involved, but the software essentially looks at the slope, or "gradient", of certain solution variables (e.g., stress) versus time on the fly to determine when it can skip the next n cycles. The maximum value of n must be defined by the simulation engineer before the run, and the engineer also inputs other parameters beforehand to impose limits on the software and optimize the run.
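Schematically, the cycle-jump idea is an extrapolation (a generic formulation, not necessarily the authors' exact scheme): when a damage-like variable D grows almost linearly per cycle, its gradient is used to leap forward by ΔN cycles at once.

```latex
D_{N+\Delta N} \approx D_N + \Delta N \left.\frac{dD}{dN}\right|_{N}
% \Delta N is capped by the user-defined maximum jump length, and a jump is taken
% only while dD/dN stays nearly constant over the recently simulated cycles
```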

 

cross-posted from: https://discuss.tchncs.de/post/3157319

Compared with traditional monolithic devices, the design and manufacturing process for chiplets is significantly different. The scrap costs associated with manufacturing traditional monolithic semiconductor devices are basically linear, comprising the single-chip cost plus packaging and assembly costs.

Manufacturing processes for 2.5D/3D designs differ significantly in terms of the accumulation of scrap costs. Specifically, these costs increase geometrically from fabrication to assembly driven by scrap costs for multiple dies, multi-chip partial assemblies, and/or full 2.5D/3D packages.
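An illustrative way to see why the accumulation is worse than linear (a sketch, not figures from the article): a defect found after assembly scraps every known good die committed to that package, not just the faulty one.

```latex
C_{\mathrm{scrap,\,monolithic}} \approx C_{\mathrm{die}} + C_{\mathrm{pkg}} + C_{\mathrm{assy}}
\qquad
C_{\mathrm{scrap,\,2.5D/3D}} \approx \sum_{i=1}^{n} C_{\mathrm{die},i} + C_{\mathrm{interposer}} + C_{\mathrm{assy}}
% a defect caught after assembly forfeits all n dies plus the assembly work
```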

Shifting tests, either left or right, in the test process is a strategy to manage this cost accumulation and minimize the overall manufacturing cost of 2.5D/3D components. Shift left means increasing test coverage earlier in the manufacturing process (e.g., during wafer inspection and partial packaging) to maximize known good die (KGD) while reducing future packaging costs. Additional tests can also be added to the process to identify new failure types or failure modes.

However, the benefits of shift left need to be weighed against its costs. For example, increasing test intensity early in the manufacturing process can positively impact known good die, but it can also lead to an increase in test costs that is not sufficiently offset by the gains, even after accounting for the resulting reduction in scrap costs.
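One hedged rule of thumb for that trade-off (illustrative, not from the article): an added test insertion only pays off if its cost stays below the expected downstream scrap it prevents.

```latex
C_{\mathrm{added\ test}} \;<\; p_{\mathrm{defect\ caught}} \cdot C_{\mathrm{downstream\ scrap\ avoided}}
```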

Shift right means increasing test coverage later in the manufacturing process, expanding the ability to detect defects and maintaining quality levels, with the goal of reducing costs through higher-parallelism testing.

Typically, a test item with a higher yield on wafer or mission-pattern tests, or a high-yield test that requires a longer scan test time, is an ideal candidate for shifting right. These tests can be moved to final or system-level test, or flexibly managed in between.

The goal of shifting tests to the left or right is to achieve the optimal combination of quality and yield throughout the entire manufacturing process, ultimately optimizing the overall cost of quality.

 

cross-posted from: https://discuss.tchncs.de/post/3011500

Many volume applications use FPGAs because they need in-field reconfigurability (changing standards, changing algorithms, etc.), but they also want to improve their system’s competitiveness (power, size, cost). FPGAs are bulky, expensive and power hungry. Integrating eFPGA can greatly improve the economics while maintaining full reconfigurability and performance.

We’ve found with customers that a significant portion of the LUTs in their designs don’t change with reconfigurations: they are fixed buses that bring data to and from the reconfigurable core. These can be hardwired, so the number of LUTs needed in the SoC is typically half of what’s in the FPGA. There is also a lot of voltage-regulator cost around an FPGA that disappears with integration.

Typically, the cost of eFPGA is 1/10th the cost of the FPGA it replaces, with the same speed and programmability. Power can also be cut to 1/10th, because most of the power in an FPGA goes to the power-hungry PHYs, which are mostly not needed when the eFPGA is integrated into the SoC.

 

[–] [email protected] 0 points 1 year ago (6 children)

This is the first time I have heard of a "black barbershop". Is it what I think it is? Why is such separation needed?

 

cross-posted from: https://discuss.tchncs.de/post/2357238

Are you an engineer working on designing complex modern chips or System On Chips (SOCs) at the Register Transfer Level (RTL)? Have you ever been in one of the following frustrating situations?

• Your RTL designs suffered a major (and expensive) bug escape due to insufficient coverage of corner cases during simulation testing.

• You created a new RTL module and want to see its real flows in simulation, but realize this will take another few weeks of testbench development work.

• You tweaked a piece of RTL to aid synthesis or timing and need to spend weeks simulating to make sure you did not actually change its functionality.

• You are in the late stages of validating a design, and the continuing stream of new bugs makes it clear that your randomized simulations are just not providing proper coverage.

• You modified the control register specification for your design and need to spend lots of time simulating to make sure your changes to the RTL correctly implement these registers.

If so, congratulations: you have picked up the right book! Each of these situations can be addressed using formal verification (FV) to significantly increase both your overall productivity and your confidence in your results. You will achieve this by using formal mathematical tools to create orders-of-magnitude increases in efficiency and productivity, as well as introducing mathematical near-certainty into areas previously dependent on informal testing.
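As a concrete taste of what this looks like in practice, here is a hand-written FPV sketch for a simple request/grant handshake (signal names and the 4-cycle bound are invented for this example, not taken from the book):

```systemverilog
// FPV sketch for a request/grant handshake (hypothetical interface, not from the book).
module handshake_props (
    input logic clk, rst_n,
    input logic req, gnt
);
    // Constrain the environment: once asserted, req stays high until granted.
    assume property (@(posedge clk) disable iff (!rst_n) req && !gnt |=> req);

    // Check the design: every request is granted within 4 cycles.
    assert property (@(posedge clk) disable iff (!rst_n) req |-> ##[1:4] gnt);

    // Make sure the scenario is reachable at all (not vacuously proven).
    cover property (@(posedge clk) disable iff (!rst_n) req ##[1:4] gnt);
endmodule
```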

Design verification has always been essential to chip design. However as chip complexity increased over years, state-space and required verification effort exponentially exploded. With emerging powerful and commercially accessible tools, formal verification has become more viable and even unavoidable for reliable sign-off and catching bugs early in the process. I found this book a very helpful introduction to formal verification. It explains how formal can be utilized, different methods like formal property verification (FPV) and sequential equivalence checks (SEC) and where they are useful, limitations, complexity problems and how to mitigate the issues that come with formal. It explains how formal and functional can complement each other for combined sigh-off. It explains theoretical concepts with clear examples and diagrams. It explains formal algorithms as well for anyone interested, but focus is more about how to utilize formal in your projects. And if you are a total beginner, do not worry, there is section which explains essentials of Systemverilog Assertions (SVA), which you can completely skip if you know about it already.