This blog is a continuation of a sequence of blogs that present the highlights from the 2018 Wilson Research Group Functional Verification Study (for a background on the study, click here). This blog discusses the issue of study bias and what we did to address these concerns, and is largely a repeat of this topic from my 2014 and 2016 blog series. However, a few observations specific to our 2018 study are worth mentioning, and I am including this blog here for completeness.
First, over the past two studies we have noticed a significant increase in projects working on designs of fewer than 500K gates. These smaller designs are often associated with sensors for IoT and automotive applications. An increase in smaller designs can introduce some interesting effects in the findings, because projects working on smaller designs are often less mature in their functional verification processes. This can affect the trend data: adoption might appear to have leveled off or reversed when, in reality, the growing number of smaller designs participating in the study is biasing the final results. I will point out these biases in later blogs.
Next, to compare trends between multiple studies, it is critical that the study be balanced and consistent with the makeup of previous studies. When we launched the 2018 study, we initially received a poor response rate from Japan. That is, the study's percentage makeup was initially out of balance with our previous study and not what we would expect in terms of geographical distribution of design projects. We addressed this problem by sending multiple reminders to the Japan study pool participants, and ultimately achieved a response rate consistent with our previous study.
An out-of-balance study will introduce trend bias across studies, not only in terms of regional participation but also by job title. For example, a study concerning women's health issues whose participants were 75 percent men would yield different findings than the same study made up of 75 percent women. Hence, in addition to regional participation, we carefully monitored job title participation to ensure it was balanced with our previous studies.
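As an illustration only (not the study's actual tooling), this kind of balance monitoring can be sketched as a simple comparison of each group's share of responses against the previous study's makeup, flagging any group that deviates beyond a tolerance. The group names, shares, and tolerance below are hypothetical:

```python
from collections import Counter

def balance_report(responses, baseline, tolerance=0.05):
    """Compare each group's share of `responses` (a list of group labels,
    e.g. regions or job titles) against `baseline` (group -> expected share
    from the previous study). Returns the groups whose share deviates from
    the baseline by more than `tolerance`, with (expected, actual) shares."""
    total = len(responses)
    counts = Counter(responses)
    flagged = {}
    for group, expected in baseline.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            flagged[group] = (expected, round(actual, 3))
    return flagged

# Hypothetical data: Japan is under-represented versus the prior study.
prev_makeup = {"North America": 0.45, "Europe": 0.25, "Asia": 0.20, "Japan": 0.10}
current = ["North America"] * 50 + ["Europe"] * 28 + ["Asia"] * 20 + ["Japan"] * 2
print(balance_report(current, prev_makeup))
```

A flagged group is the signal to act, for example by sending reminders to that region's participant pool, as described above.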
MINIMIZING STUDY BIAS
When architecting a study, three main concerns must be addressed to ensure valid results: sample validity bias, non-response bias, and stakeholder bias. The following sections discuss each of these concerns, along with the steps we took to minimize them.
Sample Validity Bias
To ensure that a study is unbiased, it is critical that every member of the studied population have an equal chance of participating. An example of a biased study would be a technical conference surveying its own attendees. The data might raise some interesting questions, but unfortunately it does not represent members of the population who were unable to attend the conference. The same bias can occur if a journal or online publication limits its surveys to its own subscribers.

A classic example of sample validity bias is the famous Literary Digest poll for the 1936 United States presidential election, in which the magazine surveyed over two million people, a huge study for that period. The sampling frame was drawn from the magazine's subscriber list, phone books, and car registrations. The problem with this approach was that the sample did not represent the actual voter population, since owning a magazine subscription, a phone, or a car was a luxury during the Great Depression. As a result of this biased sample, the poll incorrectly predicted that the Republican Alf Landon would defeat the Democrat Franklin Roosevelt in the 1936 presidential election.

For our study, we carefully chose a broad set of independent lists that, when combined, represented all regions of the world and all electronic design market segments. We then reviewed the participant results by market segment to ensure that no segment or region was inadvertently excluded or under-represented.
Non-Response Bias
Non-response bias occurs when a randomly sampled individual cannot be contacted or refuses to participate in a survey. For example, spam and unsolicited-mail filters can prevent an individual from ever receiving an invitation to participate in a study, which can bias results. It is important to validate that sufficient responses occurred across all the lists that make up the sample frame. Hence, we reviewed the final results to ensure that no single list of respondents dominated the final results.

Another potential source of non-response bias is a lack of language translation, which we learned about during our 2012 study. The 2012 study generally had good representation from all regions of the world, with the exception of an initially very poor level of participation from Japan. To solve this problem, we took two actions:
- We translated both the invitation and the survey into Japanese.
- We acquired additional engineering lists directly from Japan to augment our existing survey invitation list.
This resulted in a balanced representation from Japan. Based on that experience, we took the same approach to solve the language problem for the 2014 study.
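The list-dominance review mentioned above can be sketched as a simple share check over the sample frame. The list names, counts, and threshold here are purely illustrative, not data from the study:

```python
def dominant_lists(responses_by_list, max_share=0.5):
    """Given response counts per source list in the sample frame, return any
    list whose share of total responses exceeds `max_share` and could
    therefore bias the aggregate results toward that list's population."""
    total = sum(responses_by_list.values())
    return {name: round(count / total, 3)
            for name, count in responses_by_list.items()
            if count / total > max_share}

# Hypothetical sample frame: one list swamps the others.
counts = {"conference_list": 620, "journal_list": 180, "regional_lists": 200}
print(dominant_lists(counts))
```

If any list is flagged, its respondents' characteristics (for example, more mature verification processes) would dominate the trends, which is exactly the problem described below for the 2010 study.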
Stakeholder Bias
Stakeholder bias occurs when someone with a vested interest in the survey results completes the online survey multiple times, or urges others to complete it, in order to influence the outcome. To address this problem, a unique code was generated for each study invitation that was sent out. The code could be used only once to fill out the survey, preventing anyone from taking the study multiple times or sharing the invitation with someone else.
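A minimal sketch of such a single-use invitation code mechanism is shown below. This is an illustration of the idea, not the actual system used in the study; the class and method names are invented for this example:

```python
import secrets

class InviteCodes:
    """Issue one-time survey invitation codes and reject any reuse."""

    def __init__(self):
        self.issued = set()  # codes sent out with invitations
        self.used = set()    # codes already redeemed for a survey response

    def issue(self):
        """Generate an unpredictable code for one invitation."""
        code = secrets.token_urlsafe(8)
        self.issued.add(code)
        return code

    def redeem(self, code):
        """Accept a code only the first time a valid one is presented."""
        if code not in self.issued or code in self.used:
            return False
        self.used.add(code)
        return True

codes = InviteCodes()
c = codes.issue()
print(codes.redeem(c))        # first submission is accepted
print(codes.redeem(c))        # a second attempt with the same code is rejected
print(codes.redeem("bogus"))  # an unknown code is rejected
```

Using `secrets` rather than `random` matters here: the codes must be unguessable, or a motivated stakeholder could fabricate valid-looking ones.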
2010 Study Bias
While architecting the 2012 study, we discovered a non-response bias associated with the 2010 study. Although the 2010 study used multiple lists across multiple market segments and regions of the world, a single list dominated the responses; it consisted of participants who worked on more advanced projects and whose functional verification processes tended to be mature. Hence, for this series of blogs we have decided not to publish any of the 2010 results as part of the verification technology adoption trend analysis.

The 2007, 2012, 2014, 2016, and 2018 studies were well balanced and did not exhibit the non-response bias described above for the 2010 data. Hence, we are confident in discussing the general industry trends presented in this series of blogs.
In my next blog (click here), I begin Part 1 of the 2018 Wilson Research Group Functional Verification Study focused on FPGA design trends.
Quick links to the 2018 Wilson Research Group Study results
- Prologue: The 2018 Wilson Research Group Functional Verification Study
- Understanding and Minimizing Study Bias (2018 Study)
- Part 1 – FPGA Design Trends
- Part 2 – FPGA Verification Effectiveness Trends
- Part 3 – FPGA Verification Effort Trends
- Part 4 – FPGA Verification Effort Trends (Continued)
- Part 5 – FPGA Verification Technology Adoption Trends
- Part 6 – FPGA Verification Language and Library Adoption Trends
- Part 7 – IC/ASIC Design Trends
- Part 8 – IC/ASIC Resource Trends
- Part 9 – IC/ASIC Verification Technology Adoption Trends
- Part 10 – IC/ASIC Language and Library Adoption Trends
- Part 11 – IC/ASIC Power Management Trends
- Part 12 – IC/ASIC Verification Results Trends
- Conclusion: The 2018 Wilson Research Group Functional Verification Study