{"id":5721,"date":"2022-01-11T16:42:41","date_gmt":"2022-01-11T21:42:41","guid":{"rendered":"https:\/\/blogs.sw.siemens.com\/tecnomatix\/?p=5721"},"modified":"2026-03-26T07:06:40","modified_gmt":"2026-03-26T11:06:40","slug":"ramping-up-machine-learning-and-vision-based-automation-with-synthetic-data","status":"publish","type":"post","link":"https:\/\/blogs.sw.siemens.com\/tecnomatix\/ramping-up-machine-learning-and-vision-based-automation-with-synthetic-data\/","title":{"rendered":"Ramping up machine learning and vision-based automation with synthetic data"},"content":{"rendered":"\n<h3 class=\"wp-block-heading\"><strong>Use synthetic data to accelerate machine learning quickly and easily for vision-based automation systems<\/strong><\/h3>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Images.png\" alt=\"SynthAI Image Collage\" class=\"wp-image-5797\" width=\"724\" height=\"540\" srcset=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Images.png 965w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Images-600x448.png 600w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Images-768x573.png 768w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Images-900x672.png 900w\" sizes=\"auto, (max-width: 724px) 100vw, 724px\" \/><\/figure><\/div>\n\n\n<p>Synthetic data is about to transform artificial intelligence. Today, machine learning is used for a variety of vision-based automation use cases like robotic bin picking, sorting, palletizing, quality inspection, and others. \u00a0While usage of machine learning for vision-based automation is growing, many industries face challenges and struggle to implement it within their computer vision applications. 
\u00a0This is due in large part to the need to collect many images and the challenges associated with accurately annotating the different products within those images.<\/p>\n\n\n\n<p>One of the latest trends in this domain is to utilize <a href=\"https:\/\/en.wikipedia.org\/wiki\/Synthetic_data\" target=\"_blank\" rel=\"noopener\"><span style=\"text-decoration: underline;\">synthetic data<\/span><\/a> to speed up the data collection and training process.  Synthetic data is typically regarded as any data that is generated by a computer simulation.<\/p>\n\n\n\n<p>However, utilizing synthetic data for vision use cases requires expertise in synthetic image generation and can be complex, time-consuming, and expensive. &nbsp;In addition, while some techniques and best practices for employing a machine learning model trained with synthetic data in real life already exist, these techniques are not yet commonly practiced.<\/p>\n\n\n\n<p>There needs to be an efficient way to provide the skills traditionally required to train and deploy a vision system. &nbsp;Such skills include data collection and annotation, machine learning model training and validation, and integration into the complete automation system.<\/p>\n\n\n\n<p>Providing an automated way to address the above tasks is key to scaling up the technology and making it accessible and cost-effective. 
&nbsp;The good news is that <a href=\"https:\/\/synth.ai.sws.siemens.com\/?utm_campaign=blog_synthai\" target=\"_blank\" rel=\"noopener\"><span style=\"text-decoration: underline;\">there is a way to do it<\/span><\/a>!&nbsp; Read on to learn how.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-style-default is-layout-flow wp-block-quote-is-layout-flow\"><p><strong><em>By 2024, 60% of the data used for the development of AI and analytics projects will be synthetically generated<\/em>.<\/strong><\/p><cite><a href=\"https:\/\/www.wsj.com\/articles\/fake-it-to-make-it-companies-beef-up-ai-models-with-synthetic-data-11627032601\" target=\"_blank\" rel=\"noopener\"><span style=\"text-decoration: underline;\">WSJ<\/span><\/a>, quoting Gartner Inc.<\/cite><\/blockquote>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>The potential of synthetic data for machine learning-based vision systems<\/strong><\/h3>\n\n\n\n<p>The AI for machine vision market is expected to <span style=\"text-decoration: underline;\"><a href=\"https:\/\/www.marketsandmarkets.com\/Market-Reports\/ai-in-computer-vision-market-141658064.html\" target=\"_blank\" rel=\"noopener\">reach $25B by 2023<\/a><\/span> with a CAGR of 26.3% (source: <a href=\"https:\/\/www.marketsandmarkets.com\/\" target=\"_blank\" rel=\"noopener\"><span style=\"text-decoration: underline;\">MarketsandMarkets<\/span><\/a>). \u00a0This market consists of industry use cases such as kitting, sorting, picking, shopfloor safety, throughput analysis, quality inspection, and many more. \u00a0For instance, vision systems utilize object detection algorithms to automatically recognize the position of objects and guide a robot to pick them up. 
<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Robot-Bin-Picking-1024x576.png\" alt=\"Robot Bin Picking\" class=\"wp-image-5743\" width=\"768\" height=\"432\" srcset=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Robot-Bin-Picking-1024x576.png 1024w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Robot-Bin-Picking-600x338.png 600w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Robot-Bin-Picking-768x432.png 768w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Robot-Bin-Picking-1536x864.png 1536w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Robot-Bin-Picking-900x506.png 900w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Robot-Bin-Picking.png 1920w\" sizes=\"auto, (max-width: 768px) 100vw, 768px\" \/><figcaption>A robot picking up metal parts from a bin based on camera input.<\/figcaption><\/figure><\/div>\n\n\n<h3 class=\"wp-block-heading\"><strong>What are the steps for object detection?<\/strong><\/h3>\n\n\n\n<p>To understand the potential of synthetic data, let&#8217;s review the workflow of deploying a typical object detection vision system:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"429\" src=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Computer-Vision-Object-Detection-Workflow-1024x429.png\" alt=\"Computer Vision Object Detection Workflow\" class=\"wp-image-5750\" srcset=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Computer-Vision-Object-Detection-Workflow-1024x429.png 1024w, 
https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Computer-Vision-Object-Detection-Workflow-600x251.png 600w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Computer-Vision-Object-Detection-Workflow-768x322.png 768w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Computer-Vision-Object-Detection-Workflow-1536x643.png 1536w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Computer-Vision-Object-Detection-Workflow-900x377.png 900w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Computer-Vision-Object-Detection-Workflow.png 1868w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption>Typical workflow for the deployment of vision-based object detection systems.<\/figcaption><\/figure><\/div>\n\n\n<p>Synthetic data can help shorten this workflow and make it more robust by addressing some of the pain points in the data collection and annotation stages:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><strong>Data Collection<\/strong> \u2013 Theoretically, an infinite amount of synthetic data can be made available without having to set up the physical environment. &nbsp;This is especially beneficial for data-constrained scenarios, i.e., where the amount of real data that can be collected is limited or non-existent, or where the data is very hard to obtain. &nbsp;For instance, if an existing manufacturing line must be stopped to collect the training data, it could incur production losses.  Synthetic data can also provide much larger variation than is typically observed when collecting real data. &nbsp;For instance, in a virtual 3D environment it is easy to create varying light or other physical conditions, while in the real environment there is generally limited control over these parameters. 
&nbsp;Thus, utilizing synthetic data can improve the machine learning model&#8217;s ability to generalize well when deployed in environments that it has not encountered before.<br><\/li><li><strong>Annotation<\/strong> \u2013 Manually annotating data is often regarded as a repetitive, mundane task. &nbsp;Or, as it was phrased in a recent article by <span style=\"text-decoration: underline;\"><a href=\"https:\/\/research.google\/pubs\/pub49953\/\" target=\"_blank\" rel=\"noopener\">Google Research: &#8220;<em>Everyone wants to do the model work, not the data work<\/em>&#8221;<\/a><\/span>. &nbsp;Often, the human workforce annotating the objects lacks domain expertise or proper guidance, and this leads to inexact or simply wrong annotations.&nbsp; On the other hand, synthetic data is always accurately annotated, as the annotations (bounding boxes, object contours, etc.) are generated automatically based on complete knowledge of how the synthetic data was formed. &nbsp;This reduces the annotation errors that are typical in manual annotation projects.<\/li><\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Bridging the gap between synthetic and real<\/strong><\/h3>\n\n\n\n<p>While 3D CAD and simulation tools have been well-established for a long time, recent advances have made significant progress in transferring capabilities learned in simulation to reality. 
\u00a0Those computer vision techniques are commonly referred to as &#8220;Sim2Real&#8221;.<\/p>\n\n\n\n<p>There are a few existing methodologies for generating synthetic data that can train machine learning models to perform well when fed real data.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"429\" src=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Domain-Randomization-1024x429.png\" alt=\"Domain Randomization\" class=\"wp-image-5751\" srcset=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Domain-Randomization-1024x429.png 1024w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Domain-Randomization-600x251.png 600w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Domain-Randomization-768x322.png 768w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Domain-Randomization-1536x643.png 1536w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Domain-Randomization-900x377.png 900w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Domain-Randomization.png 1868w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption>Synthetic data generation methods fall somewhere on the scale between \u201cclose to real\u201d and \u201cdomain randomization\u201d.<\/figcaption><\/figure><\/div>\n\n\n<p>All these methodologies fall somewhere on the scale between <strong>Close to Real<\/strong> simulation and <strong>Domain Randomization<\/strong>.<br><strong><br>Close to Real<\/strong> \u2013 In this approach, you invest your effort in making the simulation as close as possible to the real expected scenario.&nbsp; Considering a bottle packing line, let\u2019s assume you need to perform automatic vision-based counting of the bottles before capping and shipping them. 
Some properties are already known before you begin generating the synthetic data:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Camera properties \u2013 exact location, field of view, resolution, etc.<\/li><li>Lighting conditions<\/li><li>Bottles and surrounding material properties \u2013 colors, textures, reflection, refraction, transparency, etc.<\/li><li>Possible positions of the bottles in the test station<\/li><li>Typical noise or artifacts generated due to the camera\u2019s optical and electronic properties<\/li><\/ul>\n\n\n\n<p>Given some of these properties, you can manually create a <span style=\"text-decoration: underline;\"><a href=\"http:\/\/siemens.com\/tecnomatix\" target=\"_blank\" rel=\"noopener\">3D simulated<\/a><\/span> scene that mimics many of them.<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><strong>Pros:<\/strong><ul><li>The trained machine learning model is likely to perform well in highly similar scenarios.<br><\/li><\/ul><\/li><li><strong>Cons:<\/strong><ul><li>Sensitive to changes and perturbations.<\/li><li>Requires higher effort to accurately simulate the scene.<\/li><li>Harder to automate or re-use in other scenarios that have even slight variance.<br><\/li><\/ul><\/li><\/ul>\n\n\n\n<p><strong>Domain Randomization (DR)<\/strong> \u2013 Here you randomize many of the environment properties, from number of objects and their locations, to material properties, camera properties, surrounding environment, etc.  When training a machine learning model based on such a randomized dataset, the resulting trained model will know how to ignore the properties that are randomized and focus on the ones that are not (such as the part geometry). 
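The randomization described above can be sketched in a few lines. This is a minimal, illustrative sketch only; the parameter names and value ranges are assumptions for demonstration and do not reflect any particular tool's randomization scheme:

```python
import random

def sample_scene(num_objects_range=(1, 12), seed=None):
    """Sample one randomized synthetic scene description.

    Everything except the part geometry (which stays fixed) is randomized,
    so a model trained on many such scenes learns to rely on geometry.
    """
    rng = random.Random(seed)
    return {
        "objects": [
            {
                # Random pose inside a nominal bin volume (meters / degrees).
                "position_m": [rng.uniform(-0.3, 0.3),
                               rng.uniform(-0.2, 0.2),
                               rng.uniform(0.0, 0.15)],
                "rotation_deg": [rng.uniform(0, 360) for _ in range(3)],
                # Randomized appearance: the trained model learns to ignore it.
                "albedo_rgb": [rng.random() for _ in range(3)],
                "roughness": rng.uniform(0.0, 1.0),
            }
            for _ in range(rng.randint(*num_objects_range))
        ],
        # Lighting and camera are randomized as well.
        "light_intensity_lux": rng.uniform(100, 2000),
        "camera_distance_m": rng.uniform(0.6, 1.5),
    }

scene = sample_scene(seed=42)
```

Each sampled dictionary would then drive one render in the 3D environment; generating thousands of such scenes is just a loop.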
This way, the trained model can generalize to various environments and domains, including the real expected environment.<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><strong>Pros:<\/strong><ul><li>Can be automated easily.<\/li><li>Spares precious engineering time.<\/li><li>Less sensitive to environment changes.<br><\/li><\/ul><\/li><li><strong>Cons:<\/strong><ul><li>Requires more data, since the randomization causes higher variance (more options for how the environment might look).<\/li><li>In some cases, the machine learning model will not be able to perform well enough in the real environment and will require some manual adjustment, e.g., setting the camera location, field of view, image resolution, and object texture.<br><\/li><\/ul><\/li><\/ul>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"590\" height=\"651\" src=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Domain-Randomization-Example.png\" alt=\"Domain Randomization Example\" class=\"wp-image-5752\" srcset=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Domain-Randomization-Example.png 590w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Domain-Randomization-Example-544x600.png 544w\" sizes=\"auto, (max-width: 590px) 100vw, 590px\" \/><figcaption>Domain randomization example \u2013 object location, appearance, and light conditions are randomized so that the model learns to ignore those properties and focus on the geometry.  Image from Fangyi Zhang et al. 
&#8211; Adversarial Discriminative Sim-to-real Transfer of Visuo-motor Policies<\/figcaption><\/figure><\/div>\n\n\n<ul class=\"wp-block-list\"><li><strong>Fine-tuning <\/strong>\u2013 A technique we use to take a machine learning model that was previously trained on some dataset for a specific task, and continue to train it on a different dataset, possibly with different parameters and for a different task.  After training models purely on synthetic data, sometimes the model can immediately perform well enough with real data. &nbsp;In some cases, depending on the environment and task, the machine learning model may require some fine-tuning using a small number of real (usually annotated) images before it can perform well.<br><\/li><li><strong>Domain adaptation (DA) <\/strong>\u2013 The ability to apply an algorithm trained in one or more &#8220;source domains&#8221; to different (but related) &#8220;target domains\u201d. &nbsp;In our case, the synthetic dataset is our source domain, and we want to train a model to perform well in real life.<br><\/li><\/ul>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"429\" src=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Domain-Shift-1024x429.png\" alt=\"Domain Shift\" class=\"wp-image-5753\" srcset=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Domain-Shift-1024x429.png 1024w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Domain-Shift-600x251.png 600w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Domain-Shift-768x322.png 768w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Domain-Shift-1536x643.png 1536w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Domain-Shift-900x377.png 900w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Domain-Shift.png 
1868w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption>\u201cDomain shift\u201d techniques help close the gap between dataset source domains and different but related target domains.<\/figcaption><\/figure><\/div>\n\n\n<p>There are several techniques to close this gap (often called \u201cdomain shift\u201d). &nbsp;Some techniques use <a href=\"https:\/\/en.wikipedia.org\/wiki\/Generative_adversarial_network\" target=\"_blank\" rel=\"noopener\"><span style=\"text-decoration: underline;\">GANs<\/span><\/a> to generate images that appear closer to the target domain. &nbsp;Other methods (like <a href=\"https:\/\/arxiv.org\/abs\/2103.16563\" target=\"_blank\" rel=\"noopener\"><span style=\"text-decoration: underline;\">this one<\/span><\/a>) use derivative based methods to generate realistic images. &nbsp;Generally, DA is a wide and fascinating field of research. If we piqued your curiosity, then <a href=\"https:\/\/towardsdatascience.com\/understanding-domain-adaptation-5baa723ac71f\" target=\"_blank\" rel=\"noopener\"><span style=\"text-decoration: underline;\">this blog post<\/span><\/a> can be a nice start.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>The challenges of adopting synthetic data for industrial use cases<\/strong><\/h3>\n\n\n\n<p>You can use game engines or simulators, like Blender, Unity3D, Unreal, Gazebo or others, and create a custom 3D simulation for the purpose of generating synthetic annotated datasets. &nbsp;Typically, to achieve your goal using those tools would require specific expertise and knowledge in 3D environments and programming. 
&nbsp;You need to know how to create your scene, create variance (randomization) between different images, adjust your virtual camera and other sensors, and finally create the images, annotated in the required format.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"429\" src=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Synthetic-Data-Expertise-1024x429.png\" alt=\"Synthetic Data Expertise\" class=\"wp-image-5748\" srcset=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Synthetic-Data-Expertise-1024x429.png 1024w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Synthetic-Data-Expertise-600x251.png 600w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Synthetic-Data-Expertise-768x322.png 768w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Synthetic-Data-Expertise-1536x643.png 1536w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Synthetic-Data-Expertise-900x377.png 900w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Synthetic-Data-Expertise.png 1868w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption>Engineers with the wide range of expertise required to use these common tools are rarely found.<\/figcaption><\/figure><\/div>\n\n\n<p>Besides the expertise required, this process, like any other engineering or development process, takes time, especially if you choose to model a close-to-real simulation. &nbsp;This can often be extremely time-consuming. 
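To make the automatic-annotation step concrete: because a synthetic scene's geometry and camera are fully known, a 2D bounding-box annotation can be computed instead of drawn by hand. Below is a minimal pinhole-camera sketch; the intrinsics and the cube example are illustrative assumptions, not values from any real camera or tool:

```python
# Deriving a 2D bounding-box annotation from known 3D geometry.
# In a synthetic scene, each object's 3D corner points and the camera
# intrinsics are known exactly, so annotations come "for free".

def project_point(pt, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Project a 3D camera-frame point (x, y, z), z > 0, to pixel coords."""
    x, y, z = pt
    return (fx * x / z + cx, fy * y / z + cy)

def bounding_box(points_3d):
    """Axis-aligned 2D bbox (xmin, ymin, xmax, ymax) of projected points."""
    pixels = [project_point(p) for p in points_3d]
    us = [u for u, _ in pixels]
    vs = [v for _, v in pixels]
    return (min(us), min(vs), max(us), max(vs))

# Corners of a 0.2 m cube centered one meter in front of the camera.
corners = [(x, y, z)
           for x in (-0.1, 0.1)
           for y in (-0.1, 0.1)
           for z in (0.9, 1.1)]
bbox = bounding_box(corners)
```

The same projection, applied to every object in every rendered frame, yields pixel-exact annotations in whatever format the training pipeline expects.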
&nbsp;Sometimes even up to a point where the effort to create the simulation is much higher than the effort to manually collect and annotate the real data.<\/p>\n\n\n\n<p>Finally, even if you choose to create the dataset yourself, you need to generate it and train on it using the correct methodology with respect to domain randomization and fine-tuning. For engineers who are not experienced in such methodologies, the training results can be sub-optimal.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>How SynthAI\u2122 software helps<\/strong><\/h3>\n\n\n\n<p><a href=\"https:\/\/synth.ai.sws.siemens.com\/?utm_campaign=blog_synthai\" target=\"_blank\" rel=\"noopener\"><span style=\"text-decoration: underline;\">SynthAI<\/span><\/a> is a new online service from Siemens Digital Industries Software that aims to solve exactly those challenges!<br>&#8212;&#8211;<\/p>\n\n\n\n<div class=\"wp-block-cover alignfull is-light\"><span aria-hidden=\"true\" class=\"wp-block-cover__background has-background-dim-100 has-background-dim has-background-gradient\" style=\"background:linear-gradient(135deg,rgb(238,238,238) 31%,rgb(169,184,195) 100%)\"><\/span><div class=\"wp-block-cover__inner-container is-layout-flow wp-block-cover-is-layout-flow\">\n<div class=\"wp-block-media-text alignwide is-stacked-on-mobile is-vertically-aligned-bottom is-image-fill\" style=\"grid-template-columns:52% auto\"><figure class=\"wp-block-media-text__media\" style=\"background-image:url(https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/08\/SynthAI-Images.png);background-position:50% 50%\"><img loading=\"lazy\" decoding=\"async\" width=\"965\" height=\"720\" src=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/08\/SynthAI-Images.png\" alt=\"SynthAI-Images\" class=\"wp-image-7064 size-full\" srcset=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/08\/SynthAI-Images.png 965w, 
https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/08\/SynthAI-Images-600x448.png 600w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/08\/SynthAI-Images-768x573.png 768w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/08\/SynthAI-Images-900x672.png 900w\" sizes=\"auto, (max-width: 965px) 100vw, 965px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<h3 class=\"has-text-color wp-block-heading\" style=\"color:#000000;font-size:32px\"><a href=\"https:\/\/synth.ai.sws.siemens.com\/?utm_campaign=blog_synthai\" target=\"_blank\" rel=\"noreferrer noopener\">Request SynthAI Early Access<\/a><\/h3>\n\n\n\n<p class=\"has-text-color\" style=\"color:#000000;font-size:17px\">Use synthetic data to accelerate machine learning quickly and easily for vision-based automation systems.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button is-style-fill\"><a class=\"wp-block-button__link\" href=\"https:\/\/synth.ai.sws.siemens.com\/?utm_campaign=blog_synthai\" target=\"_blank\" rel=\"noreferrer noopener\"><strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-white-color\">Get started now<\/mark><\/strong><\/a><\/div>\n<\/div>\n<\/div><\/div>\n<\/div><\/div>\n\n\n\n<p>&#8212;&#8211;<br>To start using SynthAI you only need to provide a CAD file of your product. &nbsp;That&#8217;s it.&nbsp; No more taking hundreds of image samples, no more hours on end of tedious annotation work or tweaking the parameters on your machine learning model. 
&nbsp;Just upload your CAD file and you&#8217;re all set.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-CAD-Preview-2.png\" alt=\"SynthAI CAD Preview\" class=\"wp-image-5768\" width=\"519\" height=\"614\" srcset=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-CAD-Preview-2.png 692w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-CAD-Preview-2-507x600.png 507w\" sizes=\"auto, (max-width: 519px) 100vw, 519px\" \/><figcaption>SynthAI only needs a CAD model to start the synthetic data creation process.<\/figcaption><\/figure><\/div>\n\n\n<p>After you upload the CAD file of your product and start the training process, SynthAI will automatically generate thousands of randomized annotated synthetic images within minutes.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"429\" src=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Synthetic-Images-1024x429.png\" alt=\"Synthetic Images\" class=\"wp-image-5749\" srcset=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Synthetic-Images-1024x429.png 1024w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Synthetic-Images-600x251.png 600w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Synthetic-Images-768x322.png 768w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Synthetic-Images-1536x643.png 1536w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Synthetic-Images-900x377.png 900w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/Synthetic-Images.png 1868w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" 
\/><figcaption>SynthAI automatically generates thousands of synthetic data images in minutes.<\/figcaption><\/figure><\/div>\n\n\n<p>But it won&#8217;t stop there \u2013 SynthAI will also automatically train a machine learning model that can be used to detect your product in real life.<\/p>\n\n\n\n<p>Once the training is done, you can download the trained model so that you can test and deploy it offline.&nbsp; If you&#8217;d like to handle the training on your own, feel free to do so \u2013 download the complete synthetic image dataset together with the annotations and run your own training.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Download-2.png\" alt=\"SynthAI Download\" class=\"wp-image-5771\" width=\"746\" height=\"566\" srcset=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Download-2.png 994w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Download-2-600x455.png 600w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Download-2-768x583.png 768w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Download-2-900x683.png 900w\" sizes=\"auto, (max-width: 746px) 100vw, 746px\" \/><figcaption>SynthAI creates both a trained model and a synthetic image dataset for download.<\/figcaption><\/figure><\/div>\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button aligncenter has-custom-font-size has-medium-font-size\"><a class=\"wp-block-button__link has-gray-dark-color has-cyan-background-color has-text-color has-background\" href=\"https:\/\/synth.ai.sws.siemens.com\/?utm_campaign=blog_synthai\" style=\"border-radius:100px\" target=\"_blank\" rel=\"noopener\">Register for early 
access<\/a><\/div>\n<\/div>\n\n\n\n<p>Integrating the trained model into your own project is just a few lines of code away \u2013 the downloaded model comes with a complete Python environment setup and sample code that lets you quickly and easily use the model to detect the trained product in your images.<\/p>\n\n\n\n<p>Your model doesn&#8217;t detect the product as expected in real images? &nbsp;You can improve its accuracy by uploading and annotating just a few real images. &nbsp;You can annotate your real images for object detection (bounding boxes) or instance segmentation (object contours) and then fine-tune your model to become much more accurate.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"625\" src=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Image-Improvement-2-1024x625.png\" alt=\"SynthAI Image Improvement\" class=\"wp-image-5773\" srcset=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Image-Improvement-2-1024x625.png 1024w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Image-Improvement-2-600x366.png 600w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Image-Improvement-2-768x468.png 768w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Image-Improvement-2-900x549.png 900w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Image-Improvement-2.png 1046w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption>Machine learning model accuracy can be improved by uploading and annotating a few real images.<\/figcaption><\/figure><\/div>\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"429\" 
src=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Image-Annotations-1024x429.png\" alt=\"SynthAI Image Annotations\" class=\"wp-image-5746\" srcset=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Image-Annotations-1024x429.png 1024w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Image-Annotations-600x251.png 600w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Image-Annotations-768x322.png 768w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Image-Annotations-1536x643.png 1536w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Image-Annotations-900x377.png 900w, https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Image-Annotations.png 1868w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption>Object detection bounding box annotations (on the left) and instance segmentation contour annotations (on the right).<\/figcaption><\/figure><\/div>\n\n\n<p><span style=\"text-decoration: underline;\"><a href=\"https:\/\/synth.ai.sws.siemens.com\/\" target=\"_blank\" rel=\"noopener\">SynthAI<\/a><\/span> is a solution still in the making and we are constantly adding features and capabilities and improving its ability to generate high-quality synthetic images and trained models.<\/p>\n\n\n\n<p>Watch this short video to see how SynthAI can be utilized to train and deploy a robotic product picking scenario:<\/p>\n\n\n\n<figure class=\"wp-block-video aligncenter\"><video controls controlsList=\"nodownload\" src=\"https:\/\/videos.mentor-cdn.com\/mgc\/videos\/5400\/88ca7766-9fc3-4358-834f-f7f0d8c0117e-en-US-video.mp4\"><\/video><\/figure>\n\n\n\n<p>&#8212;&#8211;<\/p>\n\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button aligncenter has-custom-font-size\" 
style=\"font-size:28px\"><a class=\"wp-block-button__link has-gray-dark-color has-cyan-background-color has-text-color has-background\" href=\"https:\/\/synth.ai.sws.siemens.com\/?utm_campaign=blog_synthai\" target=\"_blank\" rel=\"noopener\">Register for early access<\/a><\/div>\n<\/div>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Use synthetic data to accelerate machine learning quickly and easily for vision-based automation systems Synthetic data is about to transform&#8230;<\/p>\n","protected":false},"author":56188,"featured_media":5797,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spanish_translation":"","french_translation":"","german_translation":"","italian_translation":"","polish_translation":"","japanese_translation":"","chinese_translation":"","footnotes":""},"categories":[1],"tags":[530,531,6139],"industry":[],"product":[],"coauthors":[2032],"class_list":["post-5721","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news","tag-artificial-intelligence","tag-digital-manufacturing","tag-machine-learning"],"featured_image_url":"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/7\/2022\/01\/SynthAI-Images.png","_links":{"self":[{"href":"https:\/\/blogs.sw.siemens.com\/tecnomatix\/wp-json\/wp\/v2\/posts\/5721","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogs.sw.siemens.com\/tecnomatix\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.sw.siemens.com\/tecnomatix\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.sw.siemens.com\/tecnomatix\/wp-json\/wp\/v2\/users\/56188"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.sw.siemens.com\/tecnomatix\/wp-json\/wp\/v2\/comments?post=5721"}],"version-history":[{"count":5,"href":"https:\/\/blogs.sw.siemens.com\/tecnomatix\/wp-json\/wp\/v2\/posts\/5721\/revisions"}],"predecessor-version":[{"id":7123,"href":"https:\/\/
blogs.sw.siemens.com\/tecnomatix\/wp-json\/wp\/v2\/posts\/5721\/revisions\/7123"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogs.sw.siemens.com\/tecnomatix\/wp-json\/wp\/v2\/media\/5797"}],"wp:attachment":[{"href":"https:\/\/blogs.sw.siemens.com\/tecnomatix\/wp-json\/wp\/v2\/media?parent=5721"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.sw.siemens.com\/tecnomatix\/wp-json\/wp\/v2\/categories?post=5721"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.sw.siemens.com\/tecnomatix\/wp-json\/wp\/v2\/tags?post=5721"},{"taxonomy":"industry","embeddable":true,"href":"https:\/\/blogs.sw.siemens.com\/tecnomatix\/wp-json\/wp\/v2\/industry?post=5721"},{"taxonomy":"product","embeddable":true,"href":"https:\/\/blogs.sw.siemens.com\/tecnomatix\/wp-json\/wp\/v2\/product?post=5721"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/blogs.sw.siemens.com\/tecnomatix\/wp-json\/wp\/v2\/coauthors?post=5721"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}