Saturday, 25 June 2016

Aspect Ratio of Core/Block/Design



The aspect ratio of a core/block/design is commonly defined as:

Aspect Ratio (AR) = Height of the core / Width of the core

(Some tools and texts use the reciprocal, width/height. An AR of 1 means a square core.)






The aspect ratios of different core shapes are shown below:






The Role of Aspect Ratio in the Design:


  1. The aspect ratio affects the routing resources available in the design.
  2. The aspect ratio affects congestion.
  3. The floorplanning has to be done depending on the aspect ratio.
  4. The placement of the standard cells is also affected by the aspect ratio.
  5. The timing, and thereby the frequency of the chip, is also affected by the aspect ratio.
  6. The clock tree built on the chip is also affected by the aspect ratio.
  7. The placement of the IO pads in the IO area is also affected by the aspect ratio.
  8. The packaging is also affected by the aspect ratio.
  9. The placement of the chip on the board is also affected.
  10. Ultimately, everything depends on the aspect ratio of the core/block/design.
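Since everything above turns on the ratio itself, here is a minimal sketch of the computation (Python). The dimensions are made up, and the height/width convention is an assumption; some tools use width/height instead.

```python
# Sketch: aspect ratio of a core/block. Convention assumed here: AR = height/width
# (some tools/texts use width/height). Dimensions below are illustrative only.
def aspect_ratio(width, height):
    """Return the core aspect ratio as height divided by width."""
    return height / width

print(aspect_ratio(1000.0, 1000.0))  # 1.0 -> square core
print(aspect_ratio(1000.0, 2000.0))  # 2.0 -> tall rectangular core
```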

Standard Cell Rows



The area allotted for standard cells in the core is known as the standard cell area. This area is divided into rows known as standard cell rows, as shown in the figure below.





The height of a row is equal to the height of a standard cell. In most digital designs, the height of the standard cells is constant while the width varies. There may also be double-height cells, triple-height cells, etc., and correspondingly taller rows. The standard cells sit in the rows with the proper orientation.
The rows may or may not abut. Abutted rows share their power connections.

Thursday, 23 June 2016

Min Pulse Width Violation



The min pulse width check ensures that the pulse width of the clock signal is more than the required value.


Basically it depends on the frequency of operation and the technology. For example, if the design frequency is 1 GHz, then the typical value of each high and low pulse width will be (1 ns / 2) = 0.5 ns for a 50% duty cycle.


Normally the duty cycle is kept at 50% in most designs; otherwise the designer can face issues like clock distortion. This matters even more if the design uses half-cycle paths (data launched at the positive edge and captured at the negative edge). Since the rise and fall levels will not shift equally, if there are many buffers and inverters in the chain it is possible for the pulse to vanish completely.


We also have to consider the best and worst cases once the clock gets routed, and based on that decide what the required value of the min pulse width should be.


Now, we know that the rise delay and fall delay of combinational cells are not equal, so when a clock passes through a buffer, the pulse width at the output will differ from that at the input.
For example, if the buffer's rise delay is more than its fall delay, then the high pulse width at the output will be less than at the input.





So:
High pulse: 0.5 - 0.056 + 0.049 = 0.493 ns
Low pulse: 0.5 - 0.049 + 0.056 = 0.507 ns


For better understanding, let us go through a realistic min pulse width scenario.


Normally we use clock buffers in the clock path because their rise and fall delays are closely matched compared to normal buffers, but the delays are not exactly equal; that is why we have to check min pulse width.


We can understand it with an example:


Let there be a clock signal that reaches the clock pin of a flop through a series of buffers with different rise and fall delays. We can calculate how this affects the high and low pulses of the clock.

Working through the calculation:


High pulse width = 0.5 + (0.049 - 0.056) + (0.034 – 0.039) + (0.023 – 0.026) + (0.042 – 0.046) + (0.061 – 0.061) + (0.051 – 0.054) = 0.478ns

Low Pulse width = 0.5 + (0.056 – 0.049) + (0.038 – 0.034) + (0.026 – 0.023) + (0.046 – 0.042) + (0.061 – 0.061) + (0.054 – 0.051) = 0.522ns



Let the required min pulse width be 0.420 ns.

Uncertainty = 80 ps
Then the effective high pulse width = 0.478 - 0.080 = 0.398 ns.
We can see that we get a violation for the high pulse, since the total high pulse width is less than the required value.
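The calculation above can be reproduced with a short sketch (Python). The (rise, fall) delay pairs are the numbers from the example.

```python
# Sketch: propagate the clock's high/low pulse widths through a buffer chain.
# Each pair is (rise_delay, fall_delay) in ns, taken from the example above.
chain = [(0.056, 0.049), (0.039, 0.034), (0.026, 0.023),
         (0.046, 0.042), (0.061, 0.061), (0.054, 0.051)]

def pulse_widths(period, duty_cycle, chain):
    high = period * duty_cycle
    low = period - high
    for rise, fall in chain:
        # A rising edge is delayed by the rise delay and a falling edge by the
        # fall delay, so each stage shifts the high width by (fall - rise).
        high += fall - rise
        low += rise - fall
    return high, low

high, low = pulse_widths(1.0, 0.5, chain)
print(round(high, 3), round(low, 3))   # 0.478 0.522
print(round(high - 0.080, 3) < 0.420)  # True -> min pulse width violation
```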

Solution

To fix this violation we can add an inverter, which flips the transitions and improves the pulse:
through the first inverter the high pulse becomes wider than the low pulse, and at the next stage vice versa.

Wednesday, 27 April 2016

Crosstalk Questions

What is cross talk?


Switching of a signal on one net can interfere with a neighbouring net through the cross-coupling capacitance. This effect is known as crosstalk. Crosstalk may lead to setup or hold violations.











How can you avoid crosstalk?


-Double spacing=>more spacing=>less capacitance=>less cross talk

-Multiple vias=>less resistance=>less RC delay

-Shielding=> constant cross coupling capacitance =>known value of crosstalk

-Buffer insertion=>boosts the victim's strength


How shielding avoids crosstalk problem? What exactly happens there?


-High-frequency noise (or a glitch) is coupled to VSS (or VDD), since the shielding wires are connected to either VDD or VSS.

The coupling capacitance to VDD or VSS remains constant.




How spacing helps in reducing crosstalk noise?


More spacing between two conductors=>less cross-coupling capacitance=>less crosstalk

Why double spacing and multiple vias are used related to clock?


Why clock? Because it is the one signal that changes state regularly, and more often than any other signal. If some other signal also switches fast, we can use double spacing for it too.

Double spacing=>more spacing=>less capacitance=>less crosstalk

Multiple vias=>resistances in parallel=>less resistance=>less RC delay




How buffer can be used in victim to avoid crosstalk?


Buffers increase the victim signal's strength, and they break up the net length=>the victim is more tolerant to the signal coupled from the aggressor.

What is the difference between crosstalk noise and crosstalk delay?
Crosstalk noise is a glitch induced on a steady victim net by a switching aggressor, while crosstalk delay is the change in the victim net's delay when the victim and aggressor switch at the same time.







Friday, 22 April 2016

IR Drop

1. Power


The power spent in Complementary Metal Oxide Semiconductor (CMOS) circuits can be classified into dynamic power consumption and leakage (static) power consumption.

2. Leakage power:


is consumed at all times, even in idle states, and it dominates the total power equation in advanced technologies. It is wasted power and needs to be minimized.

3. Dynamic power consumption:


is due to the low-impedance path between the rails formed through the switching devices. The switching at the output of logic gates can be due to desired functional transitions or due to spurious transitions called glitches. The glitches at the output of logic gates are due to differences in arrival times at the various inputs. Reported figures for glitch power range from 20% to 70% of switching activity in some circuits, and 7% to 43% [1] of the dynamic power consumption.
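For reference, dynamic switching power follows the familiar P = alpha * C * Vdd^2 * f relation; a minimal sketch with illustrative (not characterized) numbers:

```python
# Sketch: dynamic (switching) power P = alpha * C * Vdd^2 * f.
# alpha: switching activity factor, C: switched capacitance,
# Vdd: supply voltage, f: clock frequency. Numbers are illustrative only.
def dynamic_power(alpha, c_switched, vdd, freq):
    return alpha * c_switched * vdd ** 2 * freq

# 20% activity, 10 pF switched capacitance, 0.9 V supply, 1 GHz clock:
p_watts = dynamic_power(0.2, 10e-12, 0.9, 1e9)
print(round(p_watts * 1e3, 2), "mW")  # 1.62 mW
```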



3.1.Glitch and Dynamic Power


Glitches are the spurious transitions which occur due to differences in the arrival times of signals at the gate inputs. They are not needed for the correct functioning of the logic circuit. The power consumed by glitches is called glitch power. Ideally, every signal net transitions at most once in each clock cycle. In reality, however, outputs may switch more than once per clock cycle, and these unnecessary transitions also consume power. They contribute significantly to unexpected peak currents higher than the original design specification. These peak currents occur in a very short period of time and simultaneously bring about a large transient voltage, or IR drop.

The IR drop is a power integrity issue and can impact circuit performance and reliability, so it is very advantageous to eliminate glitches in circuits, as power consumption is critical in today's chips. The flow of a glitch through a digital logic gate is shown in the figure. In a logic gate, the number of edges in the transients at the output can be as high as the number of arriving signals at the gate. The maximum difference in the arrival times of the signals at the inputs of the gate is called the differential path delay. It is also the maximum width of the possible glitch at the circuit output. Consider Fig. 1: in the circuit we can see the unbalanced arrival times of the inputs due to the inverter in the lower input path of the NAND gate.











4. How will you do power planning?


Unless power planning is carefully worked out early in the design, power integrity issues like excessive rail voltage drop (IR drop) and ground bounce can create timing problems. In addition, electromigration can lead to chip failures. By using best practices to develop a system-on-a-chip's (SoC's) power structure and analyzing it often throughout the design flow, one can ensure power integrity while preventing a variety of layout difficulties.

Power pads supply power to the chip. Power rings carry power around the periphery of the die, a standard cell's core area, and individual hard macros. Typically, the rings are put in higher-level routing layers to leave the lower layers for signal routing.

  1. There are two types of power planning and management: core cell power management and I/O cell power management.
  2. In the former, VDD and VSS power rings are formed around the core and macros.
  3. In addition, straps and trunks are created for macros as per the power requirements.
  4. In the latter, power rings are formed for I/O cells and trunks are constructed between the core power ring and the power pads.
  5. A top-down approach is used for the power analysis of a flattened design, while a bottom-up approach is suitable for macros.
  6. The power information can be obtained from the front-end design.
  7. The synthesis tool reports static power information.
  8. Dynamic power can be calculated using a Value Change Dump (VCD) or Switching Activity Interchange Format (SAIF) file in conjunction with the RTL description and test bench.
  9. Exhaustive test coverage is required for efficient calculation of peak power. This methodology is depicted in Figure (1).











5. How can you reduce dynamic power?


-Reduce switching activity by designing good RTL

-Clock gating

-Architectural improvements

-Reduce supply voltage

-Use multiple voltage domains-Multi vdd



The most commonly used methodology to resolve peak transient IR drop is to add decoupling capacitance (Decap) cells to the layout. These Decap cells act as local charge reservoirs and reduce the effect of peak IR drop on neighbouring circuits. However, Decap cells contribute significant gate-tunnelling leakage current to the design, and from the 90 nm node onward this contribution is even larger due to gate oxide scaling.





6. What are the vectors of dynamic power?


I & V

Dynamic voltage (IR) drop, unlike static voltage drop, depends on the switching activity of the design, and hence it is vector dependent. Dynamic IR drop analysis evaluates the IR drop caused when large amounts of circuitry switch simultaneously.

One of the key requisites is to generate a realistic VCD (Value Change Dump), a file format that captures the switching information. To account for the real cell and interconnect delays, this is typically done by annotating an SDF (Standard Delay Format) file in the gate-level simulation.

Such a simulation captures the realistic spread of switching activity in the design over a time window (T). During dynamic IR drop analysis, T is broken down into several small time steps.

The length of a time step is determined by the switching activity window or the average transition time, which can be obtained from static timing analysis.

Do you know about input vector controlled method of leakage reduction?

The leakage current of a gate depends on its inputs as well. Hence, find the set of inputs which gives the least leakage. By applying this minimum-leakage vector to a circuit, it is possible to decrease the leakage current of the circuit when it is in standby mode. This method is known as the input-vector-controlled method of leakage reduction.
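A toy sketch of the idea, using a hypothetical per-input-state leakage table (real values come from library characterization):

```python
# Sketch: input-vector-controlled leakage reduction for a single gate.
# The leakage table below is hypothetical; real per-input-state leakage
# numbers come from the library characterization (.lib).
leakage_nand2_na = {  # leakage in nA for each (A, B) input state
    (0, 0): 9.5,
    (0, 1): 14.2,
    (1, 0): 20.1,
    (1, 1): 37.8,
}

def min_leakage_vector(table):
    """Exhaustively pick the input vector with the lowest standby leakage."""
    return min(table, key=table.get)

best = min_leakage_vector(leakage_nand2_na)
print(best, leakage_nand2_na[best])  # (0, 0) 9.5
```

In a real flow the search is over the inputs of a whole standby block, so heuristics replace the exhaustive scan, but the objective is the same.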





7. If you have both IR drop and congestion how will you fix it?


-Spread macros

-Spread standard cells

-Increase strap width

-Increase number of straps

-Use proper blockage

Wednesday, 13 April 2016

Congestion


Congestion needs to be analyzed after placement, and the routing results depend on how congested your design is. Routing congestion may be localized. Some of the things that you can do to make sure routing is hassle-free are:

Placement blockages: 

The utilization constraint is not a hard rule, so if you want to specifically avoid placement in certain areas, use placement blockages.
Soft blockages (buffers only)
Hard blockages (no standard cells or buffers are allowed to be placed)
Partial blockages (same as density screens)
Halo (similar to a soft blockage, but the blockage moves along with the macro)


Macro-padding:

Macro padding, or placement halos around the macros, are placement blockages around the edges of the macros. They make sure that no standard cells are placed near the pin-outs of the macros, thereby giving extra breathing space for the macro pin connections to standard cells.

Cell padding:


Cell padding refers to placement clearance applied to standard cells in PnR tools. This is typically done to ease placement congestion or to reserve some space for later use in the flow.
For example, people typically apply cell padding to the buffers/inverters used to build the clock tree, so that space is reserved to insert DECAP cells near them after CTS.

Cell padding adds hard constraints to placement. The constraints are honored by cell legalization, CTS, and timing optimization, unless the padding is reset after placement so those operations can use the reserved space. You can also use cell padding to reserve space for routing.

The command "specifyCellPad" is used to specify the cell padding in SOC-Encounter.

This command adds padding on the right side of library cells during placement.

The padding is specified in terms of a factor that is applied to the metal2 pitch. For example, if you specify a factor of 2, the software ensures that there is additional clearance of two times the metal2 pitch on the right side of the specified cells.
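The arithmetic is simply a multiple of the metal2 pitch; a tiny sketch, with an assumed 0.2 um pitch (real pitches come from the technology LEF):

```python
# Sketch: the clearance implied by a specifyCellPad factor.
# The 0.2 um metal2 pitch is an assumption for illustration.
def padding_clearance_um(metal2_pitch_um, factor):
    """Right-side clearance = pad factor times the metal2 pitch."""
    return factor * metal2_pitch_um

print(padding_clearance_um(0.2, 2))  # 0.4 (um of clearance for a factor of 2)
```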



Maximum Utilization constraint (density screens): 

Some tools let you specify maximum core utilization numbers for specific regions. If any region has routing congestion, utilization there can be reduced, thus freeing up more area for routing.
Each tool has this setting; check with your DA for the details.

set_congestion_options -max_util .6 -coordinate {10 20 40 40}

Physical Design interview 4


What are the inputs you get for Block level Physical Design?

  1. Netlist (.v /.vhd)
  2. Timing Libraries (.lib/.db)
  3. Library Exchange Format (LEF)
  4. Technology files (.tf/.tech.lef)
  5. Constraints (SDC)
  6. Power Specification File
  7. Clock Tree Constraints
  8. Optimization requirements
  9. IO Ports file
  10. Floorplan file

What are the different checks you do on the Input Netlist.

  1. Floating Pins
  2. Unconstrained pins
  3. Undriven input ports
  4. Unloaded output ports
  5. Pin direction mismatches
  6. Multiple Drivers
  7. Zero wire load Timing checks
  8. Issues with respect to the Library file, Timing Constraints, IOs and Optimization requirements.

How to do macro Placement in a block

  1. Analyse the fly-line for connectivity between Macros to Macros and between the Macros to IO ports.
  2. Group and Place the same hierarchy Macros together.
  3. Calculate/Estimate the Channel length required between Macros.
  4. Avoid odd shapes
  5. Place macros around the block periphery, so that core area will have common logic.
  6. Keep enough room around Macros for IO routing.
  7. Give necessary blockages around the Macros like Halo around the macros.

What are the issues you see if floorplan is bad.

  1. Congestion near macro corners due to insufficient placement blockage.
  2. Standard cell placement in narrow channels leads to congestion.
  3. Macros of the same partition placed far apart can cause timing violations.

What are different optimization techniques?

  1. Cell Sizing: Size up or down to meet timing/area.
  2. Vt Swapping
  3. Cloning: fanout reduction
  4. Buffering: Buffers are added in the middle of long net paths to reduce the delay.
  5. Logical restructuring: Breaking complex cells to simpler cells or vice versa
  6. Pin swapping

    What are the inputs for the CTS.

    1. CTS SDC
    2. Max Skew
    3. Max and Min Insertion Delay
    4. Max Transition, Capacitance, Fanout
    5. No of Buffer levels
    6. Buffer/Inverter list
    7. Clock Tree Routing Metal Layers
    8. Clock tree Root pin, Leaf Pin, Preserve pin, through pin and exclude pin

    What is Metal Fill

    1. Metal Density Rule helps to avoid Over Etching or Metal Erosion.
    2. Fill the empty metal tracks with metal shapes to meet the metal density rules.
    3. There are two types of Metal Fill:
    4. Floating Metal Fill: does not completely shield the aggressor nets, so some SI effect remains.
    5. Grounded Metal Fill: completely shields the aggressor nets; less SI.

    Why the Metal Fill is required

    1. If there is a lot of gap between the routed metal shapes (empty tracks), more etching material falls into these gaps during the etching process, causing over-etching of the existing metal, which may create opens. So, in order to have uniform metal density across the chip, dummy metal is added in these empty tracks.

    What are the reasons for routing congestion

    1. Inefficient floorplan.
    2. Macro placement or macro channels are not proper.
    3. Placement blockages not given.
    4. No macro-to-macro channel space given.
    5. High cell density.
    6. High local utilization.
    7. Many complex cells (like AOI/OAI cells, which have higher pin counts) placed together.
    8. Placement of standard cells near macros.
    9. Logic optimization not done properly.
    10. Pin density too high on the edges of the block.
    11. Too many buffers added during optimization.
    12. IO ports are criss-crossed; they need to be properly aligned in order.

    What are the different methods to reduce congestion.

    1. Review the floorplan/macro placements according to the block size and port placement.
    2. Add proper placement blockages in channels and around the macro boundaries.
    3. Reduce the local density using the percentage utilization/density screens.
    4. Cell padding is applied for high pin density cells, like AOI/OAI.
    5. Check and reorder scan chain if needed.
    6. Run the congestion driven placement with high effort.
    7. Check that the power network is proper and on the routing track. If it is not on track, adjacent routing tracks may not be usable, which might lead to congestion.

    Thursday, 7 April 2016

    POCV

    Advanced on-chip variation (AOCV) analysis reduces unnecessary pessimism by taking the design methodology and fabrication process variation into account. AOCV determines derating factors based on metrics of path logic depth and the physical distance traversed by a particular path. A longer path that has more gates tends to have less total variation because the random variations from gate to gate tend to cancel each other out. A path that spans a larger physical distance across the chip tends to have larger systematic variations. AOCV is less pessimistic than a traditional OCV analysis, which relies on constant derating factors that do not take path-specific metrics into account.

    The AOCV analysis determines path-depth and location-based bounding box metrics to calculate a context-specific AOCV derating factor to apply to a path, replacing the use of a constant derating factor.

    AOCV analysis works with all other PrimeTime features and affects all reporting commands. This solution works in both the Standard Delay Format (SDF)-based and the delay calculation based flows.



    PrimeTime ADV parametric on-chip variation (POCV) models the delay of an instance as a function of a variable that is specific to the instance. That is, the instance delay is parameterized as a function of the unique delay variable for the instance.


    POCV uses a statistical approach, but it doesn’t do a full SSTA analysis. Instead, it calculates delay variation by modeling the intrinsic cell delay and load parasitics (line resistance, line capacitance, and load capacitance) to determine both the mean and “sigma” (variation) of a logic stage. The cell delay can be further broken into an n-channel component and a p-channel component. They then assume that all the cells along a path have the same mean and sigma.



    This means that a given path doesn’t have to be analyzed stage-by-stage; the number of stages can be counted, with the basic stage delay mean and sigma then used to calculate the path delay and accumulated variation. They claim that this keeps the run times down to just over what standard STA tools require, far faster than SSTA. They also claim speedier execution and greater accuracy than AOCV, and no derating tables are required.
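The stage-counting idea described above can be sketched as follows (Python). The stage mean/sigma values and the 3-sigma corner are illustrative assumptions, not tool defaults.

```python
import math

def pocv_path_delay(n_stages, stage_mean, stage_sigma, n_sigma=3.0):
    """POCV-style accumulation: stage means add linearly, while random
    stage variations add in quadrature (sigma_path = sqrt(N) * sigma_stage)."""
    path_mean = n_stages * stage_mean
    path_sigma = math.sqrt(n_stages) * stage_sigma
    return path_mean + n_sigma * path_sigma  # corner delay at n_sigma

# 16 stages of 50 ps mean and 3 ps sigma each (illustrative numbers):
print(round(pocv_path_delay(16, 50.0, 3.0), 1))  # 836.0 ps vs 800 ps nominal
```

Note how the sqrt(N) accumulation is exactly why longer paths see proportionally less random variation than a flat OCV derate would predict.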


    POCV is a technique that has been proposed as a means of reducing pessimism further by taking elements of SSTA and implementing them in a way that is less compute-intensive.
    POCV provides the following:

    •Statistical single-parameter derating for random variations

    •Single input format and characterization source for both AOCV and POCV table data

    •Nonstatistical timing reports

    •Limited statistical reporting (mean, sigma) for timing paths

    •Compatibility with existing PrimeTime functionality, except PrimeTime VX

    Compared to AOCV, POCV provides


    •Reduced pessimism gap between graph-based analysis and path-based analysis

    •Less overhead for incremental timing analysis

    Sunday, 3 April 2016

    Physical design sanity checks

    Sanity Checks in Physical Design Flow
    1. check_library
    2. check_timing
    3. report_constraint
    4. report_timing
    5. report_qor
    6. check_design
    7. check_legality

    check_library:

     check_library validates the libraries, i.e., it performs consistency checks between logical and physical libraries, across logical libraries, and within physical libraries. This command checks library quality in three main areas: physical library quality, logic versus physical library consistency, and logic versus logic library consistency.

    check_timing 

    The PnR tool won't optimize paths that are not constrained, so we have to check whether any unconstrained paths exist in the design. The check_timing command reports unconstrained paths. If there are any unconstrained paths in the design, run the report_timing_requirements command to verify that the unconstrained paths are false paths.

    No clock_relative delay specified for input ports ____________

    Unconstrained_endpoints. _________________

    End-points are not constrained for maximum delay ___________________

    report_constraints

     It checks and reports the following parameters: Worst Negative Slack (WNS), Total Negative Slack (TNS), and Design Rule Constraint violations.

     report_timing

    report_timing displays timing information about a design. The report_timing command provides a report of timing information for the current design. By default, the report_timing command reports the single worst setup path in each clock group. 


    report_qor

     report_qor displays QoR information and statistics for the current design. This command reports timing-path group and cell count details, along with current design statistics such as combinational, noncombinational, and total area. The command also reports static power, design rule violations, and compile-time details.

    check_design

     check_design checks the current design for consistency. The check_design command checks the internal representation of the current design for consistency, and issues error and warning messages as appropriate. 

    a. inputs/Outputs 300

    b. Undriven outputs (LINT-5) 505

    c. Unloaded inputs (LINT-8) 162

    d. Feedthrough (LINT-29) 174

    e. Shorted outputs (LINT-31) 52

    f. Constant outputs (LINT-52) 24

    g. Cells 152

    h. Cells do not drive (LINT-1) 1

    i. Connected to power or ground (LINT-32) 118

    j. Nets connected to multiple pins on same cell (LINT-33) 33

    k. Nets 1226

    l. Unloaded nets (LINT-2) 721



    Error messages indicate design problems of such severity that the compile command will not accept the design. Warning messages are informational and do not necessarily indicate design problems; however, they should be investigated.

    Warnings

    Potential problems detected by this command include:

    Unloaded input ports or undriven output ports
    Nets without loads or drivers, or with multiple drivers
    Cells or designs without inputs or outputs
    Mismatched pin counts between an instance and its reference
    Tristate buses with non-tristate drivers
    Wire loops across hierarchies

     check_legality 

    Reports overlap and cell placement related violations, such as orientation, overlaps, etc.

    SPEF : Standard Parasitic Exchange Format

    SPEF (Standard Parasitic Exchange Format) is documented in chapter 9 of IEEE 1481-1999. Several methods of describing parasitics are documented, but we discuss only a few important ones.
    General Syntax

    A typical SPEF file will have 4 main sections

    – a header section,
    – a name map section,
    – a top level port section and
    – the main parasitic description section.

    Generally, SPEF keywords are preceded with a *. For example, *R_UNIT, *NAME_MAP and *D_NET.


    Comments start anywhere on a line with // and run to the end of the line. Each line in a block of comments must start with //.
    Header Information

    The header section is 14 lines containing information about

    – the design name,
    – the parasitic extraction tool,
    – naming styles
    – and units.

    When reading SPEF, it is important to check the header for units as they vary across tools. By default, SPEF from Astro will be in pF and kOhm while SPEF from Star-RCXT will be in fF and Ohm.
    Name Map Section

    To reduce file size, SPEF allows long names to be mapped to shorter numbers preceded by a *. This mapping is defined in the name map section. For example:

    *NAME_MAP

    *509 F_C_EP2
    *510 F_C_EP3
    *511 TOP/BUF_ZCLK_2_pin_Z_1


    Later in the file, F_C_EP2 can be referred to by its name or by *509. Name mapping in SPEF is not required; mapped and non-mapped names can appear in the same file. Typically, short names such as a pin named A will not be mapped, since mapping would not reduce the file size. You can write a script that maps the numbers back into names. This makes the SPEF easier to read, but greatly increases the file size.
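Such a name-expansion script might look like this sketch (Python, using the map entries shown above):

```python
import re

# Sketch: expand mapped *NNN tokens back into full names, i.e. the
# reverse of the SPEF name map. Entries taken from the example above.
name_map = {
    "509": "F_C_EP2",
    "510": "F_C_EP3",
    "511": "TOP/BUF_ZCLK_2_pin_Z_1",
}

def expand(line, name_map):
    # Replace *509 etc. with the mapped name; leave unmapped tokens alone.
    return re.sub(r"\*(\d+)",
                  lambda m: name_map.get(m.group(1), m.group(0)),
                  line)

print(expand("*509:A *511:Z 0.62", name_map))
# F_C_EP2:A TOP/BUF_ZCLK_2_pin_Z_1:Z 0.62
```

A full tool would first parse the *NAME_MAP section to build the dictionary; the pattern above only touches *-prefixed numeric tokens, so keywords like *CAP are untouched.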
    Port Section

    The port section is simply a list of the top-level ports in a design. They are also annotated as input, output or bidirectional with an I, O or B. For example:

    *PORTS

    *1 I
    *2 I
    *3 O
    *4 O
    *5 O

    Parasitics

    Each extracted net will have a *D_NET section. This will usually consist of a *D_NET line, a *CONN section, a *CAP section, *RES section and a *END line. Single pin nets will not have a *RES section. Nets connected by abutting pins will not have a *CAP section.

    *D_NET regcontrol_top/GRC/n13345 1.94482

    *CONN


    *I regcontrol_top/GRC/U9743:E I *C 537.855 9150.11 *L 3.70000

    *I regcontrol_top/GRC/U9409:A I *C 540.735 9146.02 *L 5.40000

    *I regcontrol_top/GRC/U9407:Z O *C 549.370 9149.88 *D OR2M1P

    *CAP


    1 regcontrol_top/GRC/U9743:E 0.936057

    2 regcontrol_top/GRC/U9409:A regcontrol_top/GRC/U10716:Z 0.622675

    3 regcontrol_top/GRC/U9407:Z 0.386093

    *RES


    1 regcontrol_top/GRC/U9743:E regcontrol_top/GRC/U9407:Z 10.7916

    2 regcontrol_top/GRC/U9743:E regcontrol_top/GRC/U9409:A 8.07710

    3 regcontrol_top/GRC/U9409:A regcontrol_top/GRC/U9407:Z 11.9156

    *END

    The *D_NET line gives the net name and the net's total capacitance. This capacitance is the sum of all the capacitances in the *CAP section.
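As a sanity check, here is a small sketch (Python) that sums the *CAP entries of the example net and compares them against the total on the *D_NET line:

```python
# Sketch: sum the *CAP entries of the example net and compare against the
# total capacitance declared on the *D_NET line (1.94482).
spef_net = """\
*D_NET regcontrol_top/GRC/n13345 1.94482
*CAP
1 regcontrol_top/GRC/U9743:E 0.936057
2 regcontrol_top/GRC/U9409:A regcontrol_top/GRC/U10716:Z 0.622675
3 regcontrol_top/GRC/U9407:Z 0.386093
*END
"""

def cap_total(spef_text):
    total, in_cap = 0.0, False
    for line in spef_text.splitlines():
        if line.startswith("*CAP"):
            in_cap = True
        elif line.startswith("*"):        # any other keyword ends the section
            in_cap = False
        elif in_cap and line.strip():
            total += float(line.split()[-1])  # last field is the cap value
    return total

print(round(cap_total(spef_net), 4))  # 1.9448, matching the *D_NET total
```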

    *CONN Section


    The *CONN section lists the pins connected to the net. A connection to a cell instance starts with a *I. A connection to a top level port starts with a *P.

    The syntax of the *CONN entries is:

    *I <pinName> <direction> *C <x> <y> {*L <pinCap> | *D <drivingCell>}

    Where:
    – The pin name is the name of the pin.
    – The direction will be I, O or B for input, output or bidirectional.
    – The xy coordinate is the location of the pin in the layout.
    – For an input, the loading information is *L followed by the pin's capacitance.
    – For an output, the driving information is *D followed by the driving cell's type.
    – Coordinates for *P port entries may not be accurate, because some extraction tools look for the physical location of the logical port (which does not exist) rather than the location of the corresponding pin.

    *CAP Section

     The *CAP section provides detailed capacitance information for the net. Entries in the *CAP section come in two forms, one for a capacitor lumped to ground and one for a coupled capacitor.
    A capacitor lumped to ground has three fields,
    – an identifying integer,
    – a node name and
    – the capacitance value of this node

    – e.g

    o 1 regcontrol_top/GRC/U9743:E 0.936057

    A coupling capacitor has four fields,
    – an identifying integer,
    – two node names and
    – The values of the coupling capacitor between these two nodes

    – E.g

    o 2 regcontrol_top/GRC/U9409:A regcontrol_top/GRC/U10716:Z 0.622675
    If netA is coupled to netB, the coupling capacitor will be listed in each net's *CAP section.

    *RES Section

     The *RES section provides the resistance network for the net.
    Entries in *RES section contain 4 fields,
    – an identifying integer,
    – two node names and
    – the resistance between these two nodes.

    – E.g

    o 1 regcontrol_top/GRC/U9743:E regcontrol_top/GRC/U9407:Z 10.7916
    The resistance network for a net can be very complex. SPEF can contain resistor loops, or seemingly ridiculously huge resistors, even if the layout is a simple point-to-point route. This is due to how the extraction tool cuts nets into tiny pieces for extraction and then mathematically stitches them back together when writing the SPEF.

    Parasitic Values

    The above examples show a single parasitic value for each capacitor or resistor. It is up to the parasitic extraction and delay calculation flow to decide which corner this value represents. SPEF also allows min:typ:max values to be reported:
    1 regcontrol_top/GRC/U9743:E 0.936057:1.02342:1.31343

    The IEEE standard requires either 1 or 3 values to be reported. However, some tools will report min:max pairs and it is expected that tools may report many corners (corner1:corner2:corner3:corner4) in the future.

    Library Exchange Format (LEF)

    Library Exchange Format (LEF) is a specification for representing the physical layout of an integrated circuit in an ASCII format. It includes design rules and abstract information about the cells. LEF is used in conjunction with Design Exchange Format (DEF) to represent the complete physical layout of an integrated circuit while it is being designed.


    It is an ASCII data format used to describe a standard cell library. It includes the design rules for routing and the abstracts of the cells, with no information about the internal netlists of the cells.
    A LEF file contains the following sections:
    Technology:
    Layer
    Design rules,
    via definitions,
    Metal capacitance

     Site: Site extension
     Macros: cell descriptions, cell dimensions, layout of pins and blockages, capacitances.

    The technology is described by the Layer and Via statements. To each layer the following attributes may be associated:


    Type: Layer type can be routing, cut (contact), masterslice (poly, active), or overlap.


     width/pitch/spacing rules
     direction
    resistance and capacitance per unit square
    antenna Factor

    Layers are defined in process order, from bottom to top:

    poly    masterslice
    cc      cut
    metal1  routing
    via     cut
    metal2  routing
    via2    cut
    metal3  routing

    Cut Layer definition

    LAYER layerName
    TYPE CUT ;
    SPACING minSpacing ;
    END layerName

    SPACING specifies the minimum spacing allowed between via cuts on the same net or different nets. This value can be overridden by the SAMENET SPACING statement (we are going to use this statement later).


    Implant Layer definition

    LAYER layerName
    TYPE IMPLANT ;
    SPACING minSpacing
    END layerName

    Defines implant layers in the design. Each layer is defined by assigning it a name and simple spacing and width rules. These spacing and width rules only affect the legal cell placements. These rules interact with the library methodology, detailed placement, and filler cell support.

    Masterslice or Overlap Layer definition
    LAYER layerName
    TYPE {MASTERSLICE | OVERLAP} ;

    Defines masterslice (nonrouting) or overlap layers in the design. Masterslice layers are typically polysilicon layers and are only needed if the cell MACROs have pins on the polysilicon layer.


    Routing Layer definition

    LAYER layerName
    TYPE ROUTING ;
    DIRECTION {HORIZONTAL | VERTICAL} ;
    PITCH distance;
    WIDTH defWidth;
    OFFSET distance ;
    SPACING minSpacing;

    RESISTANCE RPERSQ value ;

    Specifies the resistance for a square of wire, in ohms per square. The resistance of a wire is then RPERSQ × (wire length / wire width).


    CAPACITANCE CPERSQDIST value ;

    Specifies the capacitance for each square unit, in picofarads per square micron. This is used to model wire-to-ground capacitance.
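    Putting the statements above together, a routing layer entry might look like the following. This is an illustrative sketch only; the layer name and numeric values are made up, not taken from any real technology file:

    ```
    LAYER metal1
      TYPE ROUTING ;
      DIRECTION HORIZONTAL ;
      PITCH 0.66 ;
      WIDTH 0.28 ;
      OFFSET 0.33 ;
      SPACING 0.28 ;
      RESISTANCE RPERSQ 0.11 ;
      CAPACITANCE CPERSQDIST 0.000156 ;
    END metal1
    ```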

    Manufacturing Grid

    MANUFACTURINGGRID value ;

    Defines the manufacturing grid for the design. The manufacturing grid is used for geometry alignment. When specified, shapes and cells are placed in locations that snap to the manufacturing grid.

    Via

    VIA viaName
    DEFAULT
    TOPOFSTACKONLY
    FOREIGN foreignCellName [pt [orient]] ;
    RESISTANCE value ;

    {LAYER layerName ;
    {RECT pt pt ;} ...} ...
    END viaName


    Defines vias for use by the signal routers. A default via uses exactly three layers: a cut layer, and the two layers (routing or masterslice) that touch the cut layer. The cut-layer rectangle must lie between the two routing or masterslice layer rectangles.
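    For example, a via connecting metal1 and metal2 through a single cut could be written as follows (an illustrative sketch; the names, coordinates, and resistance are made up):

    ```
    VIA via12_default DEFAULT
      RESISTANCE 3.5 ;
      LAYER metal1 ;
        RECT -0.20 -0.20 0.20 0.20 ;
      LAYER via1 ;
        RECT -0.14 -0.14 0.14 0.14 ;
      LAYER metal2 ;
        RECT -0.20 -0.20 0.20 0.20 ;
    END via12_default
    ```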

    Via Rule Generate
    VIARULE viaRuleName GENERATE
    LAYER routingLayerName ;
    { DIRECTION {HORIZONTAL | VERTICAL} ;
      OVERHANG overhang ;
      METALOVERHANG metalOverhang ;
    | ENCLOSURE overhang1 overhang2 ;}
    LAYER routingLayerName ;
    { DIRECTION {HORIZONTAL | VERTICAL} ;
      OVERHANG overhang ;
      METALOVERHANG metalOverhang ;
    | ENCLOSURE overhang1 overhang2 ;}
    LAYER cutLayerName ;
    RECT pt pt ;
    SPACING xSpacing BY ySpacing ;
    RESISTANCE resistancePerCut ;
    END viaRuleName


    Defines formulas for generating via arrays. Use the VIARULE GENERATE statement to cover special wiring that is not explicitly defined in the VIARULE statement.

    Same-Net Spacing
    SPACING
    SAMENET layerName layerName minSpace [STACK] ; ...
    END SPACING

    Defines the same-net spacing rules. Same-net spacing rules determine minimum spacing between geometries in the same net and are only required if same-net spacing is smaller than different-net spacing, or if vias on different layers have special stacking rules.


    These specifications are used for design rule checking by the routing and verification tools.


    Spacing is the edge-to-edge separation, both orthogonal and diagonal.
    Site

    SITE siteName
    CLASS {PAD | CORE} ;
    [SYMMETRY {X | Y | R90} ... ;] (will discuss this later in the macro definition)
    SIZE width BY height ;
    END siteName


    Macro

    MACRO macroName
    [CLASS { COVER [BUMP] |
             RING |
             BLOCK [BLACKBOX] |
             PAD [INPUT | OUTPUT | INOUT | POWER | SPACER | AREAIO] |
             CORE [FEEDTHRU | TIEHIGH | TIELOW | SPACER | ANTENNACELL] |
             ENDCAP {PRE | POST | TOPLEFT | TOPRIGHT | BOTTOMLEFT | BOTTOMRIGHT} } ;]
    [SOURCE {USER | BLOCK} ;]
    [FOREIGN foreignCellName [pt [orient]] ;] ...
    [ORIGIN pt ;]
    [SIZE width BY height ;]
    [SYMMETRY {X | Y | R90} ... ;]
    [SITE siteName ;]
    [PIN statement] ...
    [OBS statement] ...

    Macro Pin Statement

    PIN pinName
    [FOREIGN foreignPinName [STRUCTURE [pt [orient]]] ;]
    [DIRECTION {INPUT | OUTPUT [TRISTATE] | INOUT | FEEDTHRU} ;]
    [USE {SIGNAL | ANALOG | POWER | GROUND | CLOCK} ;]
    [SHAPE {ABUTMENT | RING | FEEDTHRU} ;]
    [MUSTJOIN pinName ;]
    {PORT
      [CLASS {NONE | CORE} ;]
      {layerGeometries} ...
    END} ...
    END pinName
    Macro Obstruction Statement

    OBS
    { LAYER layerName [SPACING minSpacing | DESIGNRULEWIDTH value] ;
      RECT pt pt ; |
      POLYGON pt pt pt pt ... ; } ...
    END
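    Putting the SITE, MACRO, PIN, and OBS statements together, a minimal standard-cell abstract could look like this. All names, dimensions, and coordinates here are hypothetical, invented only to illustrate the structure:

    ```
    MACRO INVX1
      CLASS CORE ;
      ORIGIN 0 0 ;
      SIZE 1.32 BY 3.96 ;
      SYMMETRY X Y ;
      SITE core ;
      PIN A
        DIRECTION INPUT ;
        USE SIGNAL ;
        PORT
          LAYER metal1 ;
          RECT 0.20 1.80 0.48 2.10 ;
        END
      END A
      PIN Y
        DIRECTION OUTPUT ;
        USE SIGNAL ;
        PORT
          LAYER metal1 ;
          RECT 0.84 1.80 1.12 2.10 ;
        END
      END Y
      OBS
        LAYER metal1 ;
        RECT 0.00 0.00 1.32 0.40 ;
      END
    END INVX1
    ```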

    Thursday, 31 March 2016

    Verilog interview question part3

    How can I override variables in an automatic task?

    By default, all variables in a module are static, i.e., these variables will be replicated for all instances of a module. However, in the case of task and function, either the task/function itself or the variables within them can be defined as static or automatic. The following explains the inferences through different combinations of the task/function and/or its variables, declared either as static or automatic:

     





    No automatic definition of task/function or its variables: This is the Verilog-1995 behaviour, wherein the task/function and its variables are implicitly static. The variables are allocated only once. Without the automatic keyword, concurrent calls to the task/function will overwrite each other's variables.

    Static task/function definition

    SystemVerilog introduced the keyword static. When a task/function is explicitly defined as static, its variables are allocated only once and can be overwritten by concurrent calls. This is exactly the same behaviour as before.

    Automatic task/function definition

    From Verilog-2001 onwards, and included within SystemVerilog, when the task/function is declared as automatic, its variables are also implicitly automatic. Hence, during multiple calls of the task/function, the variables are allocated each time and replicated without any overwrites.

    Static task/function and automatic variables

    SystemVerilog also allows the use of automatic variables in a static task/function. Variables not declared automatic remain implicitly static. This is useful in scenarios wherein the static variables need to be initialised before the task call, while the automatic variables are freshly allocated on each call.

    Automatic task/function and static variables

    SystemVerilog also allows the use of static variables in an automatic task/function. Variables not declared static remain implicitly automatic. This is useful in scenarios wherein a few variables need to retain their values across calls, whereas the rest can be freshly allocated each time.
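    As a small illustration of the last combination, here is a SystemVerilog sketch (the task and variable names are my own) with a static counter inside an automatic task:

    ```verilog
    module tb;
      task automatic count_calls;
        // 'calls' is explicitly static, so it persists across calls;
        // 'snapshot' is implicitly automatic, freshly allocated per call
        static integer calls = 0;
        integer snapshot;
        begin
          calls = calls + 1;
          snapshot = calls;
          $display("call number %0d", snapshot);
        end
      endtask

      initial begin
        count_calls();  // displays: call number 1
        count_calls();  // displays: call number 2
      end
    endmodule
    ```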

    What are the rules governing usage of a Verilog function?

    The following rules govern the usage of a Verilog function construct:

    A function cannot advance simulation time, using constructs like # or @.
    A function shall not have nonblocking assignments.
    A function without a range defaults to a one-bit reg for the return value.

    It is illegal to declare another object with the same name as the function in the scope where the function is declared.

    How do I prevent selected parameters of a module from being overridden during instantiation?

    If a particular parameter within a module should be prevented from being overridden, it should be declared using the localparam construct rather than the parameter construct. The localparam construct was introduced in Verilog-2001. Note that a localparam otherwise behaves identically to a parameter. In the following example, the localparam construct is used to specify num_bits, and hence trying to override it directly gives an error message.





    Note, however, that, since the width and depth are specified using the parameter construct, they can be overridden during instantiation or using defparam, and hence will indirectly override the num_bits values. In general, localparam constructs are useful in defining new and localized identifiers whose values are derived from regular parameters.
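    The situation described above can be sketched as follows (the module and parameter names here are illustrative, not from the missing figure):

    ```verilog
    module ram_model #(parameter width = 8,
                       parameter depth = 256)
                      (input  [width-1:0] data_in,
                       output [width-1:0] data_out);
      // Derived from the regular parameters; cannot be overridden directly.
      localparam num_bits = width * depth;
      // ... rest of the model ...
    endmodule

    // width and depth may be overridden, indirectly changing num_bits:
    //   ram_model #(.width(16), .depth(512)) u1 ( ... );
    // Overriding num_bits directly is an error:
    //   ram_model #(.num_bits(4096)) u2 ( ... );  // illegal
    ```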

    What are the pros and cons of specifying the parameters using the defparam construct vs. specifying during instantiation?

    The advantages of specifying parameters during instantiation are:


    All the values to all the parameters don’t need to be specified. Only those parameters that are assigned the new values need to be specified. The unspecified parameters will retain their default values specified within its module definition.

    The order of specifying the parameter is not relevant anymore, since the parameters are directly specified and linked by their name.

    The disadvantages of specifying parameters during instantiation are:

    This has a lower precedence when compared to assigning using defparam.
    The advantages of specifying parameter assignments using defparam are:
    This method always has precedence over specifying parameters during instantiation.
    All the parameter value override assignments can be grouped inside one module and together in one place, typically in the top-level testbench itself.

    When multiple defparams for a single parameter are specified, the parameter takes the value of the last defparam statement encountered in the source if, and only if, the multiple defparam’s are in the same file. If there are defparam’s in different files that override the same parameter, the final value of the parameter is indeterminate.

    The disadvantages of specifying parameter assignments using defparam are:


    The parameter is typically specified through the scope of the hierarchies underneath which it exists. If a particular module gets ungrouped in its hierarchy (sometimes necessary during synthesis), then the scope in which to specify the parameter is lost, and the parameter is left unspecified.


    For example, if a module is instantiated in a simulation testbench, and its internal parameters are then overridden using hierarchical defparam constructs (For example, defparam U1.U_fifo.width = 32;). Later, when this module is synthesized, the internal hierarchy within U1 may no longer exist in the gate-level netlist, depending upon the synthesis strategy chosen. Therefore post-synthesis simulation will fail on the hierarchical defparam override.
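    The two override styles discussed above can be contrasted in a short sketch (the module and instance names are illustrative):

    ```verilog
    module fifo #(parameter width = 8) (/* ports */);
      // ...
    endmodule

    module top;
      // Style 1: override during instantiation, by name.
      fifo #(.width(16)) u_fifo1 ();

      // Style 2: override with defparam; this takes precedence
      // over a value given during instantiation.
      fifo #(.width(16)) u_fifo2 ();
      defparam u_fifo2.width = 32;   // u_fifo2.width ends up 32
    endmodule
    ```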

    Can there be full or partial no-connects to a multi-bit port of a module during its instantiation?

    Yes. A port can be left fully unconnected during instantiation (for example, .data_out()), and a multi-bit port can be connected to a narrower net, leaving the remaining bits unconnected.

    What happens after synthesis to the logic that drives an output port left open (that is, a no-connect) during its module instantiation?

    In simulation, an unconnected output port still drives a value, but this value does not propagate to any other logic. In synthesis, the cone of combinational logic that drives the unconnected output gets optimized away during boundary optimisation, that is, optimization by synthesis tools across hierarchical boundaries.

    How is the connectivity established in Verilog when connecting wires of different widths?

    When connecting wires or ports of different widths, the connections are right-justified, that is, the rightmost bit on the RHS gets connected to the rightmost bit of the LHS and so on, until the MSB of either of the net is reached.
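    For instance, connecting a 4-bit net to an 8-bit port illustrates the right-justified matching (the module and net names are illustrative):

    ```verilog
    module consumer (input [7:0] in);
      // ...
    endmodule

    module top;
      wire [3:0] narrow;
      // Right-justified: narrow[0] -> in[0], ..., narrow[3] -> in[3];
      // in[7:4] are left undriven.
      consumer u1 (.in(narrow));
    endmodule
    ```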

    Can I use a Verilog function to define the width of a multi-bit port, wire, or reg type?


    The width elements of port, wire, or reg declarations require constants for both the MSB and LSB. Before Verilog-2001, it was a syntax error to use a function call to evaluate these widths. For example, the following code is erroneous before Verilog-2001:

    reg [get_high(val1, val2) : get_low(val3, val4)] reg1;
    In the above example, get_high and get_low are both function calls evaluating a constant result for the MSB and LSB respectively. Verilog-2001 onwards allows the use of a constant function call to evaluate the MSB or LSB of a width declaration.

    What is the implication of a combinatorial feedback loops in design testability?


    The presence of feedback loops should be avoided at any stage of the design, by periodically checking for them using lint or synthesis tools. A combinatorial feedback loop causes races and hazards in the design and leads to unpredictable logic behavior. Since the loops are delay-dependent, they cannot be tested with any ATPG algorithm. Hence, combinatorial loops should be avoided in the logic.

    What are the various methods to contain power during RTL coding?

    Any switching activity in a CMOS circuit creates a momentary current flow from VDD to GND during logic transition, when both N and P type transistors are ON, and, hence, increases power consumption.


    The most common storage element in designs is the synchronous flip-flop. Its output can change whenever its data input toggles and the clock triggers. Hence, if these two are controlled so that data is presented to the D input of the FF only when required, and the clock is triggered only when required, the switching activity, and thereby the power, is reduced.

    The following bullets summarize a few mechanisms to reduce the power consumption:

    Reduce switching of the data input to the flip-flops.
    Reduce the clock switching of the flip-flops.
    Apply area reduction techniques within the chip, since the number of gates/flip-flops that toggle can be reduced.

    Why we do gate level simulations?

    Since scan and other test structures are added during and after synthesis, they are not checked by RTL simulations and therefore need to be verified by gate-level simulation.
    Static timing analysis tools do not check asynchronous interfaces, so gate-level simulation is required to look at the timing of these interfaces.
    Careless wildcards in the static timing constraints may set false-path or multicycle-path constraints where they don't belong.
    Design changes, typos, or misunderstanding of the design can lead to incorrect false paths or multicycle paths in the static timing constraints.
    Using create_clock instead of create_generated_clock leads to incorrect static timing between clock domains.
    Gate-level simulation can be used to collect switching-factor data for power estimation.
    X's in RTL simulation can be optimistic or pessimistic. The best way to verify that the design does not have any unintended dependence on initial conditions is to run gate-level simulation.
    It's a nice "warm fuzzy" that the design has been implemented correctly.

    Say I perform formal verification, logical equivalence, across gate-level netlists (synthesis and post-routed netlists). Do you still see a reason for GLS?

    If we have verified that the synthesized netlist is functionally equivalent to the RTL, and that the synthesized netlist is logically equivalent to the post-route netlist, then we may not require GLS after P&R for functionality. But how do we ensure timing? Formal logical equivalence checking does not perform timing checks and does not ensure the design will work at the operating frequency, so I would still run GLS on the post-route database.

    An AND gate and OR gate are given inputs X & 1 , what is expected output?

    AND Gate output will be X
    OR Gate output will be 1.

    What is difference between NMOS & RNMOS?

    RNMOS is a resistive NMOS: in simulation, the drive strength of a signal passing through it is reduced by one level. Please refer to the diagram below.





    Tell something about modeling delays in verilog?


    Verilog can model delays for gates and buffers. The parameters that can be modelled are t_rise, t_fall and t_turnoff; to add further detail, each of the three values can have minimum, typical and maximum values.

    Delay modelling syntax follows a specific discipline:
    gate_type #(t_rise, t_fall, t_off) gate_name (parameters);
    When specifying the delays, it is not necessary to give all of the delay values; however, certain rules are followed.
    and #(3) gate1 (out1, in1, in2);
    When only one delay is specified, the value is used for all of the delay types, i.e. in this example t_rise = t_fall = t_off = 3.


    or #(2,3) gate2 (out2, in3, in4);
    When two delays are specified, the first value represents the rise time, the second value represents the fall time. Turn off time is presumed to be 0.


    buf #(1,2,3) gate3 (out3, enable, in5);
    When three delays are specified, the first value represents t_rise, the second value represents t_fall and the last value the turn off time.
    Min, typ and max values

    The general syntax for min, typ and max delay modelling is;

    gate_type #(t_rise_min:t_rise_typ:t_rise_max, t_fall_min:t_fall_typ:t_fall_max, t_off_min:t_off_typ:t_off_max) gate_name (parameters);


    Similar rules apply for the specifying order as above. If only one t_rise value is specified, that value is applied to min, typ and max. If specifying more than one number, then all three MUST be specified. It is incorrect to specify two values, as the compiler does not know which of the parameters each value represents.


    An example of specifying two delays;
    and #(1:2:3, 4:5:6) gate1 (out1, in1, in2);
    This shows all values necessary for rise and fall times and gives values for min, typ and max for both delay types.


    Another acceptable alternative would be;
    or #(6:3:9, 5) gate2 (out2, in3, in4);
    Here, 5 represents min, typ and max for the fall time.


    N.B. T_off is only applicable to tri-state logic devices, it does not apply to primitive logic gates because they cannot be turned off.

     

    What are conditional path delays?

    Conditional path delays, sometimes called state-dependent path delays, are used to model delays that depend on the values of signals in the circuit. This type of delay is expressed with an if conditional statement. The operands can be scalar or vector module input or inout ports, locally defined registers or nets, compile-time constants (constant numbers or specify-block parameters), or any bit-select or part-select of these. The conditional statement can contain any bitwise, logical, concatenation, conditional, or reduction operator. The else construct cannot be used.
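    A minimal sketch of a state-dependent path delay inside a specify block (the gate and the delay values are illustrative):

    ```verilog
    module xor2 (output out, input a, b);
      assign out = a ^ b;
      specify
        // The a -> out delay depends on the state of b.
        if (b)  (a => out) = (1.4, 1.6);  // rise, fall delays
        if (~b) (a => out) = (1.0, 1.2);
      endspecify
    endmodule
    ```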

    Draw a 2:1 mux using switches and verilog code for it?

    1-bit 2-1 Multiplexer





    This circuit assigns the output out to either input in1 or in2 depending on the low or high value of ctrl respectively.

    // Switch-level description of a 1-bit 2-1 multiplexer
    // ctrl=0, out=in1; ctrl=1, out=in2

    module mux21_sw (out, ctrl, in1, in2);
    output out; // mux output
    input ctrl, in1, in2; // mux inputs
    wire w; // internal wire

    inv_sw I1 (w, ctrl); // instantiate inverter module
    cmos C1 (out, in1, w, ctrl); // instantiate cmos switches
    cmos C2 (out, in2, ctrl, w);

    endmodule


    An inverter is required in the multiplexer circuit, which is instantiated from the previously defined module.


    Two transmission gates, of instance names C1 and C2, are implemented with the cmos statement, in the format cmos [instancename]([output],[input],[nmosgate],[pmosgate]). Again, the instance name is optional.

    What are the synthesizable gate level constructs?





    The above table gives all the gate-level constructs; only the constructs in the first two columns are synthesizable.


    Learn More ==>

    Physical design part1
    Physical design part2
    Physical design part3
    Placement

    verilog interview question part1
    verilog interview question part2
    verilog interview question part3

    Wednesday, 30 March 2016

    verilog interview question part2

     Why is it that "if (2'b01 & 2'b10)..." doesn't run the true case?

    This is a popular coding error: the bitwise AND operator (&) was used where the logical AND operator (&&) was intended. 2'b01 & 2'b10 evaluates to 2'b00, which is false.
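    A quick sketch of the difference:

    ```verilog
    module tb;
      initial begin
        if (2'b01 & 2'b10)   // bitwise: 2'b01 & 2'b10 = 2'b00 -> false
          $display("bitwise branch taken");    // never prints
        if (2'b01 && 2'b10)  // logical: nonzero && nonzero -> true
          $display("logical branch taken");    // prints
      end
    endmodule
    ```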


    What are Different types of Verilog Simulators ?

    There are mainly two types of simulators available.
    Event Driven
    Cycle Based

    Event-based Simulator:

    This Digital Logic Simulation method sacrifices performance for rich functionality: every active signal is calculated for every device it propagates through during a clock cycle. Full Event-based simulators support 4-28 states; simulation of Behavioral HDL, RTL HDL, gate, and transistor representations; full timing calculations for all devices; and the full HDL standard. Event-based simulators are like a Swiss Army knife with many different features but none are particularly fast.

    Cycle Based Simulator:

    This is a Digital Logic Simulation method that eliminates unnecessary calculations to achieve huge performance gains in verifying Boolean logic:


    1.) Results are only examined at the end of every clock cycle; and

    2.) The digital logic is the only part of the design simulated (no timing calculations). By limiting the calculations, Cycle based Simulators can provide huge increases in performance over conventional Event-based simulators.

    Cycle based simulators are more like a high speed electric carving knife in comparison because they focus on a subset of the biggest problem: logic verification.

    Cycle based simulators are almost invariably used along with Static Timing verifier to compensate for the lost timing information coverage.



    What is Constrained-Random Verification ?

    As ASIC and system-on-chip (SoC) designs continue to increase in size and complexity, there is an equal or greater increase in the size of the verification effort required to achieve functional coverage goals. This has created a trend in RTL verification techniques to employ constrained-random verification, which shifts the emphasis from hand-authored tests to utilization of compute resources. With the corresponding emergence of faster, more complex bus standards to handle the massive volume of data traffic there has also been a renewed significance for verification IP to speed the time taken to develop advanced testbench environments that include randomization of bus traffic.


    Directed-Test Methodology

    Building a directed verification environment with a comprehensive set of directed tests is extremely time-consuming and difficult. Since directed tests only cover conditions that have been anticipated by the verification team, they do a poor job of covering corner cases. This can lead to costly re-spins or, worse still, missed market windows.


    Traditionally verification IP works in a directed-test environment by acting on specific testbench commands such as read, write or burst to generate transactions for whichever protocol is being tested. This directed traffic is used to verify that an interface behaves as expected in response to valid transactions and error conditions. The drawback is that, in this directed methodology, the task of writing the command code and checking the responses across the full breadth of a protocol is an overwhelming task. The verification team frequently runs out of time before a mandated tape-out date, leading to poorly tested interfaces. However, the bigger issue is that directed tests only test for predicted behavior and it is typically the unforeseen that trips up design teams and leads to extremely costly bugs found in silicon.


    Constrained-Random Verification Methodology

    The advent of constrained-random verification gives verification engineers an effective method to achieve coverage goals faster and also help find corner-case problems. It shifts the emphasis from writing an enormous number of directed tests to writing a smaller set of constrained-random scenarios that let the compute resources do the work. Coverage goals are achieved not by the sheer weight of manual labor required to hand-write directed tests but by the number of processors that can be utilized to run random seeds. This significantly reduces the time required to achieve the coverage goals.

    Scoreboards are used to verify that data has successfully reached its destination, while monitors snoop the interfaces to provide coverage information. New or revised constraints focus verification on the uncovered parts of the design under test. As verification progresses, the simulation tool identifies the best seeds, which are then retained as regression tests to create a set of scenarios, constraints, and seeds that provide high coverage of the design.

    Difference between blocking and nonblocking assignments

    While both blocking and nonblocking assignments are procedural assignments, they differ in behaviour with respect to simulation and logic synthesis as follows:
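    The difference can be sketched with the classic two-flop example (the signal names are illustrative):

    ```verilog
    module shift_examples (input clk, input d,
                           output reg a1, b1, a2, b2);
      // Blocking (=): statements execute in order, so b1 sees the
      // NEW value of a1 -- effectively a single register stage.
      always @(posedge clk) begin
        a1 = d;
        b1 = a1;   // b1 == d after this edge
      end

      // Nonblocking (<=): all right-hand sides are sampled before
      // any update, so b2 gets the OLD value of a2 -- a proper
      // two-stage shift register.
      always @(posedge clk) begin
        a2 <= d;
        b2 <= a2;
      end
    endmodule
    ```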







    How can I model a bi-directional net with assignments influencing both source and destination?

    The assign statement constitutes a continuous assignment. The changes on the RHS of the statement immediately reflect on the LHS net. However, any changes on the LHS don't get reflected on the RHS. For example, in the following statement, changes to the rhs net will update the lhs net, but not vice versa.


    wire rhs, lhs;
    assign lhs = rhs;


    SystemVerilog has introduced the keyword alias, which can be used only on nets to create a two-way assignment. For example, in the following code, any change to rhs is reflected on lhs, and vice versa.

    module test ();
    wire rhs, lhs;
    alias lhs = rhs;
    endmodule

    In the above example, any change to either side of the net gets reflected on the other side.

    Are tasks and functions re-entrant, and how are they different from static task and function calls?

    In Verilog-95, tasks and functions were not re-entrant. From Verilog-2001 onwards, tasks and functions can be made re-entrant by placing the keyword automatic between the keyword task and the name of the task. The automatic keyword causes the variables within a task to be replicated and allocated dynamically for each entry during concurrent task calls, i.e., the values don't get overwritten across calls. Without the keyword, the variables are allocated statically, which means they are shared across different task calls and can hence get overwritten by each call.





    Read More ==>
    Physical design part1
    Physical design part2
    Physical design part3
    Placement

    verilog interview question part1
    verilog interview question part2
    verilog interview question part3

    Thursday, 24 March 2016

    verilog interview question part1

    How to write FSM is verilog?

    There are mainly four ways to write FSM code:
    1) Using one process, where the input decoder, present-state logic, and output decoder are combined in one process.
    2) Using two processes, where the combinational and sequential circuits are separated into different processes.
    3) Using two processes, where the input decoder and present-state logic are combined and the output decoder is separated into another process.
    4) Using three processes, where all three (input decoder, present-state logic, and output decoder) are separated into three processes.
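    As an illustration of style 2 above, a minimal two-process FSM might look like this (a made-up two-state example; names and encoding are mine):

    ```verilog
    module fsm2p (input clk, rst, in, output reg out);
      localparam IDLE = 1'b0, RUN = 1'b1;
      reg state, next_state;

      // Sequential process: the state register.
      always @(posedge clk or posedge rst)
        if (rst) state <= IDLE;
        else     state <= next_state;

      // Combinational process: next-state and output decoding.
      always @(*) begin
        next_state = state;
        out = 1'b0;
        case (state)
          IDLE: if (in)  next_state = RUN;
          RUN:  begin
                  out = 1'b1;
                  if (!in) next_state = IDLE;
                end
        endcase
      end
    endmodule
    ```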

    What is difference between freeze deposit and force?

    $deposit(variable, value);

    This system task sets a Verilog register or net to the specified value. variable is the register or net to be changed; value is the new value for the register or net. The value remains until there is a subsequent driver transaction or another $deposit task for the same register or net. This system task operates identically to the ModelSim force -deposit command.

    The force command has -freeze, -drive, and -deposit options. When none of these is specified, -freeze is assumed for unresolved signals and -drive for resolved signals. This is designed to provide compatibility with force files; if you prefer -freeze as the default for both resolved and unresolved signals, that default can be changed (in ModelSim, through the modelsim.ini file).

    Will case infer priority register if yes how give an example?

    Yes, case can infer a priority encoder depending on the coding style:

    reg r;

    // Priority encoded mux
    always @ (a or b or c or select2)
    begin
      r = c;
      case (select2)
        2'b00: r = a;
        2'b01: r = b;
      endcase
    end

    Casex,z difference,which is preferable,why?

    CASEZ: a special version of the case statement which uses a Z logic value to represent don't-care bits.
    CASEX: a special version of the case statement which uses Z or X logic values to represent don't-care bits.
    CASEZ should be used for case statements with wildcard don't-cares; otherwise plain CASE should be used. CASEX should never be used.

    This is because:

    Don’t cares are not allowed in the "case" statement. Therefore casex or casez are required. Casex will automatically match any x or z with anything in the case statement. Casez will only match z’s -- x’s require an absolute match.
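    A small sketch of casez with wildcard don't-cares (the opcode encoding here is made up):

    ```verilog
    module decode (input [3:0] opcode, output reg [1:0] sel_out);
      always @(*) begin
        casez (opcode)
          4'b1???: sel_out = 2'd3;  // matches any opcode whose MSB is 1
          4'b01??: sel_out = 2'd2;
          4'b001?: sel_out = 2'd1;
          default: sel_out = 2'd0;
        endcase
      end
    endmodule
    ```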


    Given the following Verilog code, what value of "a" is displayed?


    always @(clk) begin

    a = 0;

    #0 a <= 1;
    $display(a);
    end

    This is a tricky one! Verilog scheduling semantics basically imply a four-level deep queue for the current simulation time:

    1: Active Events (blocking statements)

    2: Inactive Events (#0 delays, etc)

    3: Non-Blocking Assign Updates (non-blocking statements)

    4: Monitor Events ($display, $monitor, etc).

    Since the "a = 0" is an active event, it is scheduled into the 1st "queue". The "a <= 1" is a non-blocking event, so it's placed into the 3rd queue.


    Finally, the display statement is placed into the 4th queue. Only events in the active queue are completed this sim cycle, so the "a = 0" happens, and then the display shows a = 0. If we were to look at the value of a in the next sim cycle, it would show 1.




    What is the difference between the following two lines of Verilog code?


    #5 a = b;

    a = #5 b;



    #5 a = b; waits five time units before doing the action for "a = b;" (b is sampled after the delay).

    a = #5 b; the value of b is calculated immediately and stored in an internal temporary; after five time units, this stored value is assigned to a.


    What is the difference between -->

    c = foo ? a : b;
    and
    if (foo) c = a;
    else c = b;

    The ? merges answers if the condition is "x", so for instance if foo = 1'bx, a = 'b10, and b = 'b11, you'd get c = 'b1x. On the other hand, if treats Xs or Zs as FALSE, so you'd always get c = b.


    What are Inertial and Transport Delays?

    What does `timescale 1 ns/ 1 ps signify in a verilog code?


    The `timescale directive is a compiler directive used to specify the unit and precision of simulation times and delays. Usage: `timescale <time_unit> / <time_precision>. The time_unit specifies the unit of measurement for times and delays; the time_precision specifies the precision to which the delays are rounded off. So `timescale 1 ns / 1 ps means times are measured in units of 1 ns and rounded to a precision of 1 ps.

     What is the difference between === and == ?


    The output of "==" can be 1, 0 or X.

    The output of "===" can only be 0 or 1.

    When you compare two numbers using "==" and one or both of them have one or more bits as "x", the output will be "X". But if you use "===", the output will be 0 or 1.

    e.g. A = 3'b1x0

    B = 3'b10x

    A == B will give X as output.

    A === B will give 0 as output.

    "==" is used for comparison of 1's and 0's only; it can't compare X's. If any bit of either input is X, the output will be X.

    "===" compares X's (and Z's) literally as well.


    How to generate sine wav using verilog coding style?

    A: A common and efficient way to generate a sine wave in hardware is the CORDIC algorithm; for modest precision, a lookup table is even simpler.
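A full CORDIC core is fairly long; as a simpler hedged sketch (module name and sample values are assumed), a lookup-table (LUT) generator steps through pre-computed sine samples each clock. CORDIC instead computes sin/cos iteratively with shifts and adds, trading latency for ROM:

```verilog
// LUT sine generator: 16 signed 8-bit samples of one period,
// i.e. sin(2*pi*addr/16) scaled to +/-127.
module sine_lut (
  input                   clk,
  input                   rst,
  output reg signed [7:0] sine
);
  reg [3:0] addr;  // 16 samples per period
  always @(posedge clk) begin
    if (rst) begin
      addr <= 4'd0;
      sine <= 8'sd0;
    end else begin
      addr <= addr + 4'd1;
      case (addr)
        4'd0:  sine <= 8'sd0;
        4'd1:  sine <= 8'sd49;
        4'd2:  sine <= 8'sd90;
        4'd3:  sine <= 8'sd117;
        4'd4:  sine <= 8'sd127;
        4'd5:  sine <= 8'sd117;
        4'd6:  sine <= 8'sd90;
        4'd7:  sine <= 8'sd49;
        4'd8:  sine <= 8'sd0;
        4'd9:  sine <= -8'sd49;
        4'd10: sine <= -8'sd90;
        4'd11: sine <= -8'sd117;
        4'd12: sine <= -8'sd127;
        4'd13: sine <= -8'sd117;
        4'd14: sine <= -8'sd90;
        4'd15: sine <= -8'sd49;
      endcase
    end
  end
endmodule
```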

     What is the difference between wire and reg?


    Net types (wire, tri) represent physical connections between structural elements; their values are driven by continuous assignments or gate outputs. Register types (reg, integer, time, real, realtime) represent abstract data storage elements and are assigned values only within an always or initial block. The main difference is that a wire cannot hold (store) a value: if nothing drives it, it floats, whereas a reg holds its last assigned value even with no active driver. Default values: wire is z, reg is x.
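A hypothetical fragment showing the two side by side:

```verilog
// A wire must be continuously driven, while a reg is assigned in
// procedural blocks and holds its value between assignments.
module wire_vs_reg (
  input  a, b, clk,
  output w,          // net: needs a continuous driver
  output reg r       // variable: keeps its value until reassigned
);
  assign w = a & b;  // wire: continuous assignment
  always @(posedge clk)
    r <= a | b;      // reg: updated only at clock edges
endmodule
```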




    How do you implement the bi-directional ports in Verilog HDL?


    module bidirec (oe, clk, inp, outp, bidir);

    // Port declarations
    input oe;
    input clk;
    input [7:0] inp;
    output [7:0] outp;
    inout [7:0] bidir;

    reg [7:0] a;
    reg [7:0] b;

    // Drive the bus only when output enable is asserted; otherwise tri-state
    assign bidir = oe ? a : 8'bZ;
    assign outp = b;

    // Register the bus value and the input on each clock edge
    always @(posedge clk)
    begin
      b <= bidir;
      a <= inp;
    end

    endmodule

    What is Verilog case (1'b1)?

    wire [3:0] x;

    always @(...) begin
      case (1'b1)
        x[0]: SOMETHING1;
        x[1]: SOMETHING2;
        x[2]: SOMETHING3;
        x[3]: SOMETHING4;
      endcase
    end

    The case statement walks down the list of case items and executes the first one whose expression matches 1'b1, so it behaves like a priority encoder. Here, if the lowest set bit of x is bit 2, then SOMETHING3 is the statement that will get executed (or selected by the logic).

    GO to -->

    Physical design part1
    Physical design part2
    Physical design part3
    Placement

    verilog interview question part1
    verilog interview question part2
    verilog interview question part3

    physical design interview2


    Explain the flow of physical design and inputs and outputs for each step in flow.

    The physical design flow is generally explained in Figure 1. For each step of the flow, the EDA tools available from the two main EDA companies, Synopsys and Cadence, are also listed. Timing and power analysis can be carried out at each and every step. If the timing and power requirements are not met, then either the whole flow has to be re-exercised, or going back one or two steps and optimizing the design (or incremental optimization) may meet the requirements.





    What is cell delay and net delay?

    Gate delay

    Transistors within a gate take a finite time to switch, so a change on the input of a gate takes a finite time to cause a change on the output. [Magma]

    Gate delay = f(input transition time, Cnet + Cpin)

    Cell delay is the same as gate delay.

    Cell delay


    For any gate it is measured from the 50% point of the input transition to the corresponding 50% point of the output transition.

    Intrinsic delay


    Intrinsic delay is the delay internal to the gate: from the input pin of the cell to the output pin of the cell.

    It is defined as the delay between an input and output pair of a cell when a near-zero slew is applied to the input pin and the output sees no load. It is predominantly caused by the internal capacitance associated with the cell's transistors.

    This delay is largely independent of the size of the transistors forming the gate, because upsizing the transistors increases the drive strength and the internal capacitance together.

    Net Delay (or wire delay)


    The difference between the time a signal is first applied to the net and the time it reaches the other devices connected to that net.

    It is due to the finite resistance and capacitance of the net. It is also known as wire delay.

    Wire delay = f(Rnet, Cnet + Cpin)
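As a first-order illustration (a lumped-RC, Elmore-style approximation; the numbers below are assumed, not from the source), the 50% wire delay can be estimated as:

```latex
t_{wire} \approx 0.69 \, R_{net} \, (C_{net} + C_{pin})
        = 0.69 \times 200\,\Omega \times 50\,\mathrm{fF} \approx 6.9\,\mathrm{ps}
```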


    What are delay models and what is the difference between them?

    1. Linear Delay Model (LDM)
    2. Non Linear Delay Model (NLDM)


    What is wire load model?


    A wire load model provides estimated R and C values for a net (typically as a function of its fanout and the design area) before actual routing information is available; these estimates feed the delay calculation.


    Why higher metal layers are preferred for Vdd and Vss?

    1. Because they have less resistance and hence lead to less IR drop.


    What is logic optimization and give some methods of logic optimization.

    1. Upsizing
    2. Downsizing
    3. Buffer insertion
    4. Buffer relocation
    5. Dummy buffer placement


    What is the significance of negative slack?

    1. Negative slack ==> there is a setup violation ==> the design can fail.


    What is signal integrity? How it affects Timing?

    1. IR drop, electromigration (EM), crosstalk, and ground bounce are signal integrity issues.
    2. If IR drop is more ==> delay increases.
    3. Crosstalk ==> there can be setup as well as hold violations.


    What is IR drop? How to avoid? How it affects timing?

    1. There is a resistance associated with each metal layer. Current flowing through this resistance causes a voltage drop, i.e. IR drop.
    2. If IR drop is more ==> delay increases.
    3. It can be reduced with a robust power grid: wider power rails, more power straps, and decoupling capacitor cells.


    What is EM and it effects?

    1. Due to high current flow, metal atoms can be displaced from their original places. When this happens in a large amount, the metal line can open, or bulging of the metal layer can occur. This effect is known as electromigration.
    2. Effects: either a short or an open of the signal line or power line.


    What are types of routing?

    1. Global Routing
    2. Track Assignment
    3. Detail Routing


    What is latency? Give the types?

    Source Latency

    1. Source latency is defined as "the delay from the clock origin point to the clock definition point in the design".
    2. Delay from the clock source to the beginning of the clock tree (i.e. the clock definition point).
    3. The time a clock signal takes to propagate from its ideal waveform origin point to the clock definition point in the design.

    Network latency


    1. It is also known as insertion delay. It is defined as "the delay from the clock definition point to the clock pin of the register".
    2. The time the clock signal (rise or fall) takes to propagate from the clock definition point to a register clock pin.


    What is track assignment?

    1. Second stage of the routing wherein particular metal tracks (or layers) are assigned to the signal nets.


    What is congestion?

    1. If the number of routing tracks available for routing is less than the number of tracks required, it is known as congestion.


    Whether congestion is related to placement or routing?

    1. Routing


    What are clock trees?

    1. Distribution of clock from the clock source to the sync pin of the registers.


    What are clock tree types?

    1. H tree, Balanced tree, X tree, Clustering tree, Fish bone


    What is cloning and buffering?

    1. Cloning is an optimization method that decreases the load on a heavily loaded cell by replicating the cell and splitting the fanout between the copies.
    2. Buffering is an optimization method that inserts buffers in high-fanout nets to decrease the delay.
