In the Wolfram Language, Inpaint is a built-in function. The region to be inpainted (or retouched) can be given as an image, a graphics object, or a matrix. There are five different algorithms available in Inpaint that one can select using the Method option: “Diffusion,” “TotalVariation,” “FastMarching,” “NavierStokes,” and “TextureSynthesis” (the default setting). “TextureSynthesis,” in contrast to the other algorithms, does not operate separately on each color channel and does not introduce any new pixel values. In other words, each inpainted pixel value is taken from the parts of the input image that correspond to zero elements in the region argument. In the example below, it is clearly visible that these properties of the “TextureSynthesis” algorithm make it the method of choice for removing large objects from an image.

The “TextureSynthesis” method is based on the algorithm described in “Image Texture Tools,” a PhD thesis by P. Harrison. This algorithm is an enhanced best-fit approach introduced in 1981 by D. Garber in “Computational Models for Texture Analysis and Texture Synthesis.” Parameters for the “TextureSynthesis” algorithm can be specified via two suboptions: “NeighborCount” (default: 30) and “MaxSamples” (default: 300). The first parameter defines the number of nearby pixels used for texture comparison, and the second specifies the maximum number of samples used to find the best-fit texture. Different effects can be obtained by changing the values of these suboptions.

Let’s go back to the extrapolation of Van Gogh’s painting. First, I import the painting and remove the border. Next, I extend the image by padding it with white pixels to generate the inpainting region. Now I can extrapolate the painting using the “TextureSynthesis” method.
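The extrapolation steps above can be sketched as follows. This is a minimal illustration, not the post's exact code: the painting URL and the padding width of 128 pixels are placeholder assumptions, and any source image will do.

```wolfram
(* Import the painting and trim away its uniform border *)
painting = ImageCrop[Import["https://example.com/starry-night.jpg"]];

(* Extend the canvas by 128 white pixels on every side;
   the white frame is the region to be synthesized *)
padded = ImagePad[painting, 128, White];

(* Mask: 1 in the frame (inpaint here), 0 over the original pixels *)
mask = ImagePad[ConstantImage[0, ImageDimensions[painting]], 128, 1];

(* Extrapolate with the default "TextureSynthesis" method,
   tuning its two suboptions *)
extrapolated = Inpaint[padded, mask,
  Method -> {"TextureSynthesis", "NeighborCount" -> 30, "MaxSamples" -> 300}]
```

Larger “NeighborCount” values compare more surrounding pixels per synthesized pixel, which tends to give continuations that blend more smoothly with the original texture, at the cost of longer run times.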
The term “digital inpainting” was first introduced in the “Image Inpainting” article presented at the SIGGRAPH 2000 conference. The main goal of inpainting is to restore damaged parts of an image. However, it is also widely used to remove or replace selected objects.
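Object removal reduces to the same call: mark the unwanted pixels in the region argument and let the algorithm fill them in. A minimal sketch, using one of the built-in test images and an arbitrarily placed disk as the mask (both chosen here purely for illustration):

```wolfram
img = ExampleData[{"TestImage", "House"}];

(* White disk marks the object to remove; black pixels are left untouched *)
mask = Graphics[{White, Disk[{256, 256}, 60]},
  Background -> Black, PlotRange -> {{0, 512}, {0, 512}}];

(* Replace the masked pixels with texture synthesized from the rest of the image *)
Inpaint[img, mask]
```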
Can computers learn to paint like Van Gogh? To some extent, definitely yes! For that, akin to human imitation artists, an algorithm should first be fed the original artists’ creations, and then it will be able to generate a machine take on them.

Recently the Department of Engineering at the University of Cambridge announced the winners of its annual photography competition, “The Art of Engineering: Images from the Frontiers of Technology.” The second prize went to Yarin Gal, a PhD student in the Machine Learning group, for his extrapolation of Van Gogh’s painting Starry Night, shown above, which took second prize in the ZEISS photography competition. Readers can view this and similar computer-extended images at Gal’s website, Extrapolated Art. An inpainting algorithm called PatchMatch was used to create the machine art, and in this post I will show how one can obtain similar effects using the Wolfram Language.