A Study on Digital Low Poly Modeling Methods as an Abstraction Tool in Design Processes

Low poly modeling, one of the most common abstraction methods, initially emerged to maximize the efficiency of the digital modeling process. Besides decreasing file size, simplifying 3D objects through low poly modeling may also yield novel designs. Different software packages offer various simplification methods. In this paper, the polygon reduction algorithms (i) Decimate-Collapse in Blender 2.80, (ii) ProOptimizer in 3ds Max 2019, and (iii) Clustering Decimation in MeshLab 2019 are compared on a sphere geometry to understand the potential of low poly modeling. Low poly sphere models at different levels of detail are produced with each algorithm. The comparison criteria are (i) geometric change/level of fidelity, (ii) number of polygons, and (iii) file size. Each algorithm is evaluated both on its own and against the others. As a result, 3ds Max ProOptimizer stands out as the most efficient tool for reducing file size, and therefore for saving time. Blender Decimate-Collapse is more efficient at preserving the geometry down to low levels of detail, making it the best abstraction method in terms of fidelity. MeshLab Clustering Decimation, on the other hand, is more efficient at creating a new form without losing fidelity, but it cannot produce models at every level of detail. As a future study, MeshLab Clustering Decimation shows potential for artificial intelligence studies in designing new objects.


Introduction
When design processes in various fields are examined, it is observed that abstraction methods are used to express an object visually and formally. Abstraction also saves the practitioner time, since it simplifies the object as a form. In addition, a shape created by abstraction can serve as the form of a new design. Today, 3D modeling techniques in the digital environment are commonly used to visualize designs. In that area, Low Poly Mesh Modeling methods provide efficient abstraction in terms of speed of work, polygon reduction, and originality.
Furthermore, a low poly model is a simplified version of an object in an analog or digital environment because it contains less data.
In this study, these two areas are dealt with sequentially. Within the subject of 3D mesh modeling, Low Poly Mesh Modeling methods and related studies on simplifying a model by decreasing its polygon count are examined. With the findings and inferences revealed by the field study, the aim is to contribute to 3D software developers, users, and the scientific literature.
Within this framework, low poly mesh modeling processes carried out in a digital environment are evaluated with respect to the polygonal geometric abstraction used in the design process. First, the concepts of fidelity, creativity, and function are used to evaluate polygonal and triangular abstraction. Then, low poly modeling tools and methods are examined. One of these methods, automatic polygon reduction algorithms, is discussed in general terms, and the operations these algorithms perform are shown. Three of them in particular, Edge Collapse, Vertex Removal, and Cell Collapse, and their algorithmic principles are explained in more detail.
In the Case Study, research was carried out on a sphere geometry with three software algorithms using the operations described above. First, the method is described as the production and comparison of 3D models at different levels of detail, and the evaluation criteria are determined as (i) level of fidelity, (ii) geometric change, (iii) polygon count, and (iv) file size. The comparison is made using the Decimate-Collapse algorithm in Blender 2.80, the ProOptimizer algorithm in 3ds Max 2019, and the Clustering Decimation algorithm in MeshLab 2019. Fifteen low poly models were produced from the sphere geometry in each software package. The data obtained from these models are presented visually and numerically in tables. Using these data, each algorithm is compared both within itself and with the other algorithms at the same level of detail. Finally, the results of the evaluations made in the case study are explained, and conclusions and suggestions are presented.

Abstraction
Abstraction, a concept used in many fields such as art, design, and literature, is defined by the Britannica Encyclopedia as the cognitive process of isolating a common feature or relationship observed in many things [1]. When this definition is interpreted as a formal expression, abstraction can be explained as expressing an object in the simplest possible form. When studies and artifacts using a cognitive approach are observed [2,3], it is seen that the term abstraction is used not only for obtaining the simplest state of the object but also for the steps toward simplification. The formal data that people use to describe the objects around them can be used to reveal a more abstract visual expression of the object.

Fidelity and Function in Abstraction
For an abstraction study to be successful, it must be similar to the object it expresses and be able to define it [4]. The concept of "level of fidelity" is used to describe how closely an abstraction resembles the original form. According to Gunay [4], fidelity describes the similarity of an item used in a work of art to the object it expresses. Peirce [5], an expert in semiotics, explains the concept as a sign that should verify its referent. In other words, if that meaning or purpose can be observed when looking at an object designed for a specific purpose, or when meaning is attributed to it, "fidelity" is accepted to occur [5].
According to Gunay [4], a work of art also means a re-presentation of images in the real world. In this re-presentation, how reality is reflected varies [5].

Polygonal and Triangular Abstraction
The shapes defined as "polyhedron" and "polytope" in geometry correspond to the forms produced by polygonal abstraction. A polytope is a geometric object with flat (planar) edges and surfaces; it is the generalization of the three-dimensional polyhedron to any dimension. A polygon is a 2D polytope, and a polyhedron is a 3D polytope [5]. As will be understood, a polytope can have an open or closed surface, so it does not have to be a solid. A polyhedron is a 3D solid with planar polygonal faces, straight edges, and sharp corners, whose faces are fully joined to each other [7]. Described in terms of a polytope, a polyhedron is the 3D solid state of a polytope, which is the more general definition in any dimension [8]. The concept of polygonal abstraction in painting is frequently encountered in the "Cubism" and "Abstract Art" movements at the beginning of the 20th century (Figure 1).

Low Poly Mesh Modeling
"Low Poly Modeling" methods are used to create versions of a model at even lower detail levels [12]. In the literature, the one of two models of the same object with the lower polygon count is generally defined as low poly [13]. In addition, methods of reducing the number of polygons while preserving the geometry of a model are also included in this modeling definition [14]. Furthermore, with low poly modeling, smaller file sizes can be processed [15]. Almost all 3D modeling software includes tools that help with manual low poly modeling; software such as Blender, 3ds Max, Maya, Cinema 4D, Rhinoceros, MeshLab, Lightwave 3D, Wings 3D, Silo 3D, Milkshape 3D, K-3d, and Seamless 3D provide these capabilities.
Figure 1. … Lichtenstein (1980) [9]. Cubism: (top left) "Las Meninas", Diego Velázquez (1656) [10]; (bottom left, top right, bottom right) "Meninas", Picasso (1957) [11].
It was observed that the definition of "Low Poly Model" (LPM) in the digital environment is mostly used for the version of a 2D or 3D object that exists visually with fewer polygons but with a geometry similar, close, or reminiscent to the original. In addition, the definition is also used today for objects modeled as low poly from the beginning, as a design, style, or preference.
These methods offer different options, such as manual modeling from scratch, manual polygon reduction, or the use of global polygon reduction algorithms. However, this study focuses on regional polygon reduction algorithms because of their similarities to analog abstraction tools.

Polygon Reduction (PR)
The concepts of polygonal simplification, mesh simplification, and polygon reduction all mean converting a 3D polygonal model into a topologically simpler version. While reducing the number of polygons, an attempt is made to preserve as much of the geometry as possible [16].
For each object, there is a number of polygons sufficient to convey its formal expression. The need for polygonal simplification arises in models with more polygons than this number. The aim is to optimize the polygon count, reducing it while keeping the expression intact. The process is also referred to as polygon reduction, since in practice it decreases the number of polygons.
The process of reducing the polygon count of a 3D mesh model can be done automatically with certain tools instead of manually [17]. These tools are algorithms that perform automatic polygon reduction. By changing the parameters of such an algorithm, low poly models can be produced at different levels of detail. The algorithms available in 3D modeling software can also be used through the software via a plug-in or through coding. In addition, it is possible to use these algorithms through some simple applications prepared with a focus on the reconstruction of the mesh model, and this can even be done online. For example, a trial version of the PR application based on the Polygon Mesh Processing Library can be accessed and used on a website. A mesh simplification algorithm that can be installed as an extension for the Unity 3D software can likewise perform the PR operation within that software.
These algorithms are categorized under different titles in various sources. When included in a 3D modeling software package, they are referred to by the name of the tool or feature that the software offers to the user. For this reason, in the remainder of this study, the algorithms in the software are referred to not by the name of the type they belong to, but by the name of the feature or tool providing access to them. For example, the automatic PR algorithm that belongs to the Point Clustering type and is accessed through the Clustering Decimation simplification filter in MeshLab is called Clustering Decimation [18]. The PR algorithm that is observed to follow the logic of the Vertex Decimation algorithm and is available in 3ds Max is called ProOptimizer [19]. Today, many PR algorithms work with different methods. Luebke [17] divided these algorithms into Vertex Clustering, Vertex Decimation, Quadric Error Metric, Rsimp: Reverse Simplification, Image-Driven Simplification, Skip Strip, and Triangulation of Polygonal Models.

Low Poly Model Production Processes of Polygon Reduction Algorithms
One approach to creating a low poly 3D model is to turn a model considered high poly into a low poly one by reducing its number of polygons. As a method, the facilitating manual tools and features offered by software can be used, or it can be done with algorithms that offer automated solutions. When the regional polygon reduction algorithms and the PR processes performed with them are examined, it is observed that they consist of different stages. Although not every stage is included in every algorithm, each is used in at least one type of algorithm encountered in the literature. Basically, the part of the process where the reduction is performed is the operation itself [17]. However, these operations need additional steps: calculations are required beforehand to determine the edges or corners to be processed, and afterwards to minimize the differences in the new geometry.
Regional polygon reduction operations can be divided into: (i) Edge Collapse, (ii) Vertex-Pair Collapse, (iii) Triangle Collapse, (iv) Vertex Removal, (v) Cell Collapse, (vi) Polygon Merging, and (vii) General Geometric Replacement [17]. When these operations are examined, types (i), (iv), and (v) are observed to be the basic operations, and the others are derived from them. Kobbelt et al. [20] also support this observation. Therefore, these three operations were studied as research subjects.

Regional Polygon Reduction Algorithms-Edge Collapse (EC)
In the Edge Collapse operation, the two points to be collapsed are selected before the operation according to preferences that differ between algorithms. For example, the edges in the regions closest to planar geometry can be given priority [21]. The location of the new point depends on the error measurement process of the algorithm to which the operation belongs. This process tries to preserve the curvature of a curvilinear form while expressing it with straight lines; in more general terms, the aim is to preserve the geometry as much as possible. In Figure 2, the EC operation is visualized in a 3D view. The new point is placed in a different plane, moving away from the old plane in a direction that preserves the geometry.
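The stages described above (candidate selection, placement of the new point, and face rewriting) can be sketched in Python as follows. This is a deliberately minimal illustration: the function name and the shortest-edge/midpoint choices are ours, whereas the algorithms discussed in this study select edges and place the new vertex using an error metric.

```python
def edge_collapse(verts, faces):
    """Collapse the shortest edge of a triangle mesh into its midpoint.

    verts: list of (x, y, z) tuples; faces: list of (i, j, k) index triples.
    A simplified sketch: real algorithms choose the edge and the new point
    position via an error metric rather than edge length and midpoint.
    """
    def length2(a, b):
        return sum((p - q) ** 2 for p, q in zip(verts[a], verts[b]))

    # 1. Candidate selection: collect every edge appearing in a face.
    edges = {tuple(sorted(e)) for f in faces
             for e in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0]))}
    u, v = min(edges, key=lambda e: length2(*e))

    # 2. Placement: here simply the midpoint of the collapsed edge.
    midpoint = tuple((p + q) / 2 for p, q in zip(verts[u], verts[v]))
    new_verts = list(verts)
    new_verts[u] = midpoint            # u absorbs v; verts[v] becomes unused

    # 3. Rewrite faces: redirect v to u and drop degenerate triangles
    #    (faces that contained both u and v lose a distinct corner).
    new_faces = []
    for f in faces:
        g = tuple(u if i == v else i for i in f)
        if len(set(g)) == 3:
            new_faces.append(g)
    return new_verts, new_faces
```

Repeating this step until a target polygon count is reached gives the basic decimation loop that tools such as Decimate-Collapse expose through a ratio parameter.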

Regional Polygon Reduction Algorithms-Vertex Removal (VR) and Half-Edge Collapse (HEC)
One of the polygon reduction operations that works by removing adjacent vertices, edges, and faces is called Vertex Removal, first defined by Schroeder [22]. Vertex Decimation algorithms are then built on this operation [17].
The Vertex Removal operation can be briefly described as follows: the point selected by the algorithm is deleted together with its adjacent edges and faces. A hole is formed as a result of this deletion. The hole is then closed using a triangulation method, by creating new triangles from the corners of the hole (Figure 2).
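A minimal sketch of the deletion-and-retriangulation steps just described, assuming for simplicity that the ordered ring of neighbours around the removed vertex (the hole boundary) is already known; real implementations recover this loop from mesh connectivity and may choose triangulations that better preserve geometry than the simple fan used here:

```python
def vertex_removal(verts, faces, v, boundary_loop):
    """Remove vertex v and fan-triangulate the resulting hole.

    A sketch of Schroeder-style vertex removal: delete v together with its
    incident faces, then close the hole. `boundary_loop` is the ring of
    v's neighbours in order around the hole (assumed given here).
    """
    # 1. Delete every face incident to v (this opens a hole).
    kept = [f for f in faces if v not in f]

    # 2. Close the hole with a triangle fan anchored at one boundary vertex.
    anchor = boundary_loop[0]
    for a, b in zip(boundary_loop[1:-1], boundary_loop[2:]):
        kept.append((anchor, a, b))
    return verts, kept   # verts[v] is now simply unreferenced
```

Removing one interior vertex of valence n in this way replaces n triangles with n - 2, which is why Vertex Decimation algorithms apply it repeatedly to reach a target count.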

Regional Polygon Reduction Algorithms-Cell Collapse (CC)
The CC operation divides a 3D model into 3D cells of equal volume or size. Then, in each cell where points of the model fall, it removes the old points and creates a new one. This new point is located either at the position of one of the old points or at a new position, depending on the algorithm. New edges and faces are created from the new points formed in the cells, using the data structure of the old model. Thus, a model with a lower polygon count is created (Figure 2). This operation may vary according to the new topology and geometry of the products, the type/size of the cells, and which point is selected in each cell. Another criterion is whether the new point keeps its old place or not. The location of the new point is seen as one of the crucial factors determining the shape.
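The cell-based logic above can be sketched as follows in Python. The cubic grid, the averaged representative point, and the function name are our illustrative choices; as noted, variants may instead keep one of the original points as the representative.

```python
from collections import defaultdict

def cluster_decimate(verts, faces, cell_size):
    """Vertex-clustering (cell collapse) decimation sketch.

    Partition space into cubic cells of edge `cell_size`; all vertices
    falling in the same cell are replaced by one representative (their
    average here). Faces whose corners end up in fewer than three distinct
    cells collapse away, which is why this method cannot hit arbitrary
    target polygon counts.
    """
    # 1. Assign every vertex to a grid cell.
    cells = defaultdict(list)
    for i, (x, y, z) in enumerate(verts):
        key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        cells[key].append(i)

    # 2. One representative vertex per occupied cell.
    new_index, new_verts = {}, []
    for members in cells.values():
        rep = tuple(sum(verts[i][k] for i in members) / len(members)
                    for k in range(3))
        for i in members:
            new_index[i] = len(new_verts)
        new_verts.append(rep)

    # 3. Remap faces; drop those that became degenerate.
    new_faces = [tuple(new_index[i] for i in f) for f in faces
                 if len({new_index[i] for i in f}) == 3]
    return new_verts, new_faces
```

Because the output polygon count is dictated by the cell size rather than requested directly, this sketch also illustrates why the Clustering Decimation results later in this study cannot match every level of detail.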

Aim, Content, and Restrictions of the Study
The case study was conducted using three of the regional polygon reduction algorithms. These algorithms were compared both within themselves and among each other over 15 low poly models with different levels of detail. The 3D modeling software Blender 2.80, 3ds Max 2019, and MeshLab 2019 was used. All modeling was done on a 3D mesh model of a triangulated sphere geometry with a polygon count of 1280. The Windows 10 operating system was used in these applications.
This research aims to contribute to the studies in the literature on the use of polygon reduction algorithms, which is one of the low poly model generation methods. For this purpose, algorithms using three operations with different working principles have been compared with each other. Unlike the complex 3D models used in previous studies, the aim here was to study the sphere object using a simpler geometry and topology to make these algorithms and operations easier to understand. Simple and more accessible programs were preferred as much as possible (Blender, MeshLab).
When the studies in the literature on automatic polygon reduction algorithms and operations are examined, comparisons are generally made on file sizes, polygon/face/edge/vertex counts, and conservation of the original geometry. In this study, the algorithms and their operations are evaluated by adding geometric change to the above criteria, and, unlike other works, they are also handled as a design abstraction tool. The concept of fidelity, used to grade the similarity of the shape obtained by abstracting an object in a design process to its original form, is also examined. In the evaluations and field research, an attempt is made to determine the changes in geometry, level of fidelity, file size, and polygon count. In addition, term suggestions are made for the stages of the level of fidelity in abstraction that are described only relatively in the literature, and for thresholds that are not defined precisely. Finally, an attempt was made to obtain data and inferences from the field research about using automatic polygon reduction processes in the form-finding phase of a design process, which has not been mentioned before in the literature.

Hypotheses
In the study, two main hypotheses are verified:
- In the field of abstraction in design, there are similarities between analog and digital methods. The working methods of the polygon reduction algorithms that produce a low poly model and of analog abstraction methods, and the results they produce, are determined through the products. Changes in geometry also change the degree and performance of abstraction.
- Geometric variation, level of fidelity, polygon count, and file size significantly affect the performance of a polygon reduction algorithm. Algorithms provide different efficiencies as these criteria change. Each algorithm yields a different file size, polygon count, geometry, or level of fidelity at the same level of detail.

Material and Method
In the case study, after the research and observations made in the low poly 3D modeling area, polygon reduction algorithms (PRA) belonging to different 3D modeling software in the digital environment were applied to a pre-determined 3D geometry, and comparisons were made on the data obtained from the resulting low poly models (LPM). In this way, evaluations were made on both the software and the automatic polygon reduction (PR) algorithms they use and their operations.
In the case study, the three selected software packages and their PRAs were compared using a specific basic geometry. While the number of polygons of this basic geometry decreased, the file size, level of fidelity, and changes in geometry were examined. On the 3D sphere model with a PC of 1280, the following were applied separately and compared: (i) the Edge Collapse operation via the Decimate-Collapse (DC) algorithm in Blender 2.80, (ii) the Vertex Removal / Half-Edge Collapse operations via the ProOptimizer algorithm in 3ds Max 2019, and (iii) the Cell Collapse operation via the Clustering Decimation algorithm in MeshLab 2019. In each software application, 15 variations of the sphere were modeled. These 15 final products from each operation were listed together with their original form, and three tables were created. Variations are listed sequentially from the high poly model to the low poly model, as shown in Tables 2, 3, and 4. These models were then compared with each other.

Regional Polygonal Reduction Algorithms and Operations on 3D Modeling Software
In the case study, different 3D modeling software packages that include PRAs were studied. First, 3D modeling software used today and in the past for creating 3D objects in the design processes of different sectors was examined. As mentioned before, most 3D CAD software includes automatic polygonal simplification algorithms. The scope of the study is limited to the algorithms available in the software; PR algorithms that can be loaded as plug-ins or run through coding are not included.
For this reason, the software versions studied are those without any later-installed plug-ins, that is, the software as first installed on the computer. The versions of the software that were researched are written next to their names in the table; later versions are not included in this research.
In Table 1, examples of 3D software that exist today, examined and used before the study, are given together with their PR algorithms/operations.
The three software packages host different default PR algorithms (not loaded later by the user). These PR tools were used in the field research. Along with the algorithms hosted by the tools, the focus was also on the PR operations within each algorithm, because the operation is the step of the process where the number of polygons is reduced, that is, where the abstraction is made. While choosing the software used in the research, the algorithms and operations of the tools included in them were also considered. With this approach, it was decided to use the Blender Decimate-Collapse, 3ds Max ProOptimizer, and MeshLab Clustering Decimation tools.

3D Sphere Polygon Mesh Model
Objects of different areas and types were examined during the decision-making phase for the 3D model to be worked on, and note was taken of which objects had been used in previous studies in this area. Among these models, the sphere shape was chosen because of its primitive and well-defined geometry. There are many ways to produce 3D sphere models in the digital environment, and different modeling methods can be used to model the sphere as a mesh object. One method is to create horizontal and vertical (latitude-longitude) curves, use their intersection points as corners to form quadrilaterals or triangles, and connect these polygons into a sphere mesh. Another method is to create a sphere by subdividing the polygons of one of the models called "platonic solids", refining its geometry and mesh. Many methods that produce the sphere object were examined in this way, and the sphere mesh model with a PC of 1280, created with this second method, was used in the case study.
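The second method can be sketched as follows: starting from an icosahedron, each subdivision splits every triangle into four and projects the new vertices back onto the unit sphere, so three rounds take the 20 faces to 80, 320, and finally the 1280 triangles used in this study. This Python sketch is a generic icosphere construction, not the exact code of any of the three software packages.

```python
import math

def icosphere(subdivisions):
    """Build a unit-sphere triangle mesh by subdividing an icosahedron."""
    t = (1.0 + math.sqrt(5.0)) / 2.0   # golden ratio
    verts = [(-1, t, 0), (1, t, 0), (-1, -t, 0), (1, -t, 0),
             (0, -1, t), (0, 1, t), (0, -1, -t), (0, 1, -t),
             (t, 0, -1), (t, 0, 1), (-t, 0, -1), (-t, 0, 1)]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]

    def normalize(p):
        n = math.sqrt(sum(c * c for c in p))
        return tuple(c / n for c in p)

    verts = [normalize(v) for v in verts]
    midpoint_cache = {}

    def midpoint(a, b):
        # Shared edges reuse the same midpoint vertex.
        key = (min(a, b), max(a, b))
        if key not in midpoint_cache:
            m = normalize(tuple((verts[a][k] + verts[b][k]) / 2
                                for k in range(3)))
            midpoint_cache[key] = len(verts)
            verts.append(m)
        return midpoint_cache[key]

    for _ in range(subdivisions):
        next_faces = []
        for a, b, c in faces:
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            # Each triangle is split into four; new points lie on the sphere.
            next_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
        faces = next_faces
    return verts, faces
```

Three subdivisions yield the 1280-triangle (642-vertex) base sphere on which the polygon reduction algorithms are applied in this study.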
In all three applications, the "sphere" object has the same shape and topological feature. The objects were always viewed from the same angle and distance to have the same views given in the tables. To be able to compare the low poly sphere models to be created in these three software applications with each other correctly, the spheres prepared in 3ds Max and MeshLab software were rotated and brought to the same position as the spheres in the Blender software.

Criteria of Study
The "Level of Fidelity" (LOF), "Polygon Count" (PC), and "File Size" (FS) data of each model were used as the main criteria. The final model obtained from applying the algorithm was compared with the geometry/topology of the initial state. The output models were observed to see how much they preserve the geometry and how much they change the topology and LOF. How far the final model moved away from its original form, and how well it can still define that form, were also examined. In this way, information was obtained about the levels of detail at which the algorithms, operations, and software can be used in the design abstraction process. For this purpose, by examining Tables 2, 3, and 4, it was determined at which polygon counts each algorithm and operation successfully abstracted the original model, at which point the level of fidelity decreased, and at which point fidelity could no longer be achieved. In addition, the LODs at which algorithms generate a unique form without losing fidelity were identified.
The three software algorithms were compared based on the polygon count and file size of the models. The file sizes of models with the same or similar polygon counts from different algorithms were compared. In this way, information was obtained about which algorithm and operation can produce a low poly model (LPM) faster and more easily.
In addition, different levels of detail of the icosahedron created with the subdivision method were compared geometrically/topologically and on the LOF/FS criteria with the final LPMs of similar polygon count. In this way, it was revealed how preferable the final LPMs produced by each application are, and the results were evaluated.

Comparison of 3D Models
Using the Blender Decimate-Collapse, 3ds Max ProOptimizer, and MeshLab Clustering Decimation algorithms, LPMs were obtained from the 3D sphere mesh object with 1280 PC, whose construction is explained in the preceding sections. Their appearances, PC values, and FS quantities were recorded. Three tables were created by separating these data according to the algorithm used. The models and data that emerged in the studies carried out in the Blender, 3ds Max, and MeshLab software are shown in Tables 2, 3, and 4, respectively.

Automatic Low Poly Model Generation with Blender Decimate-Collapse Algorithm
The 16 models created using the "Decimate-Collapse" algorithm, which includes the EC operation, in the Blender software and shown in Table 2 were examined. It turns out that the algorithm can reduce a sphere mesh model with 1280 PC down to 384 PC while preserving the geometry and level of fidelity. Down to 64 PC, it changes the geometry and provides a low level of fidelity. Down to 16 PC, it changes the geometry further and produces LPMs at an even lower level of fidelity. Below 16 PC, down to 2 PC, output with the same features can be produced, but it can no longer be said that there is fidelity in the abstraction. Based on this, when the LPMs with 64, 32, and 16 PC are observed on their own, it is possible to say that they still somewhat resemble the sphere object while taking on a new form. The LPMs with 8, 4, and 2 PC, on the other hand, cannot be said to resemble the sphere object, although they have taken on a new form, due to the loss of fidelity.

Automatic Low Poly Model Generation with 3ds Max ProOptimizer Algorithm
The 16 models created using the "ProOptimizer" algorithm, which includes the VR operation, in the 3ds Max software and shown in Table 3 were examined. It turns out that the algorithm can reduce a sphere mesh model with 1280 PC down to 384 PC while preserving the geometry and level of fidelity. Down to 64 PC, it changes the geometry and provides a low level of fidelity. Down to 16 PC, it changes the geometry further and produces LPMs at an even lower level of fidelity. Below 16 PC, down to 2 PC, output with the same features can be produced, but it can no longer be said that there is fidelity in the abstraction. Based on this, when the LPMs with 64, 32, and 16 PC are observed independently, it is safe to say that they still somewhat resemble the sphere object while taking on a new form. The LPMs with 8, 4, and 2 PC, on the other hand, cannot be said to resemble the sphere object, although they have taken on a new form, since the fidelity has disappeared.

Automatic Low Poly Model Generation with MeshLab Clustering Decimation Algorithm
Before evaluating the 12 models created using the "Clustering Decimation" (CD) algorithm, which includes the Cell Collapse operation, in the MeshLab software, it is necessary to describe the application process. This method has a different process compared to Blender Decimate-Collapse and 3ds Max ProOptimizer. During the operation phase of the algorithm, the vertices of the new model are created depending on the size of the cells used in the calculation. Therefore, it is not possible to perform PR at every desired rate. As a result, the PC values of the LPMs created with the MeshLab Clustering Decimation algorithm cannot be the same as those produced in the 3ds Max and Blender studies; with this method, not even a model that approaches particular LODs can always be created. For this reason, when creating Table 4, the models closest in PC to the low poly models (LPMs) in Tables 2 and 3 were selected. "LOD 4" is left blank because no model close in PC could be created; likewise, since there are no models at the LOD 11, 14, and 15 levels, these levels are also left blank in the table. Table 4 illustrates that the Clustering Decimation algorithm in MeshLab can reduce a sphere mesh model with 1280 PC down to 516 PC while preserving the geometry and level of fidelity. Down to 156 PC, it changes the geometry and provides a low level of fidelity. Down to 60 PC, it changes the geometry further and produces LPMs at an even lower level of fidelity. Based on this, when the 60 PC LPM is observed on its own, it is possible to say that the model still somewhat resembles the sphere object while taking on a new form. The algorithm cannot produce LPMs with a PC smaller than 60, except for 12 PC. Due to the disappearance of fidelity, the 12 PC LPM cannot be said to resemble the sphere object, even though it takes on a new form.

Comparison of Polygon Reduction Algorithms Based on Results Obtained in Case Study
One of the LPMs obtained from each algorithm and shown in Tables 2, 3, and 4 with the same or similar PC was selected, and the geometry/topologies, levels of fidelity, and file sizes of these three LPMs were compared. While choosing LPMs, it was considered that they should be at a relative level of fidelity. Then, the HPMs obtained by subdivision of the icosahedron were also added to the comparative models. Thanks to this addition, the quality of the smoothness of geometry and the efficiency of a model that the algorithms can produce were revealed. This comparison process was made for LPMs at two different levels of detail.
Generally, when the LPMs in the first two rows and the last two rows of Tables 2, 3, and 4 are compared, it can be said that the three algorithms and operations preserve the geometry better in models with a higher detail level. Therefore, it is sufficient to choose one LPM per algorithm to compare the LPMs in the first two rows. The geometry begins to transform at lower levels of detail, and after a certain level the geometric changes in the LPMs cause the fidelity in abstraction to disappear. For this reason, to make comparisons between algorithms, one LPM from each table was chosen to represent the geometric changes that maintain fidelity at lower levels of detail. For the first comparison, shown in Table 5, the LPMs "BLE LOD 7" (PC 384), "3DM LOD 7" (PC 384), and "MEL LOD 7" (PC 360) and, additionally, the HPM (PC 320) obtained by subdividing the icosahedron were used. Compared to each other, the three LPMs are geometrically similar, so it is possible to say they have the same level of fidelity. However, "3DM LOD 7" is topologically more complex and irregular, "BLE LOD 7" is topologically more regular, and "MEL LOD 7" has a more symmetrical topology. These three LPMs are geometrically similar to "ABB 3" and, therefore, have the same level of fidelity. Viewed topologically, however, they do not have the regularity of the topology of "ABB 3"; their topological structures can be said to be more dispersed.
Differences are observed when the file sizes of the three LPMs with the same or close PC are compared: "BLE LOD 7" has a file size of 33.6 KB, "3DM LOD 7" 20.7 KB, and "MEL LOD 7" 21.2 KB. The file size of "3DM LOD 7" is thus smaller than that of "BLE LOD 7", which has the same PC, and smaller than that of "MEL LOD 7", even though "3DM LOD 7" has the higher PC of the two. Based on this, it is possible to say that the 3ds Max ProOptimizer algorithm, and therefore the Vertex Removal operation, is the most successful at these PC levels in FS reduction. Even when "3DM LOD 7" is compared with "ABB 3" saved in 3ds Max, "3DM LOD 7" has a lower file size despite its more complex operation process and higher polygon count. After "3DM LOD 7", "MEL LOD 7" ranks second in file size at these PC levels.
For the second comparison, shown in Table 6, "BLE LOD 10" (PC 64), "3DM LOD 10" (PC 64), and "MEL LOD 10" (PC 80) were used, together with the HPM (PC 80) obtained by subdividing the icosahedron. The three LPMs differ geometrically from each other. "BLE LOD 10" changed less than "3DM LOD 10" and "MEL LOD 10", while "MEL LOD 10" now has a geometry distinct from the sphere. The level of fidelity of "BLE LOD 10" is therefore higher, and "3DM LOD 10" is closer to "BLE LOD 10" than to "MEL LOD 10" in this respect. "MEL LOD 10" evolves into a unique form, retaining only a very low level of fidelity. From these evaluations, the Blender Decimate-Collapse algorithm, and hence the Edge Collapse operation, is the more successful at these PC levels in preserving geometry and the level of fidelity. The MeshLab Clustering Decimation algorithm, and hence the Cell Collapse operation, gives better results in creating an original form without eliminating the fidelity of the abstraction.
Although "BLE LOD 10" and "3DM LOD 10" are geometrically similar, "BLE LOD 10" is topologically smoother while "3DM LOD 10" is more complex and irregular. "MEL LOD 10" has a more symmetrical topology in itself. When these three LPMs are compared with "ABB 2", they again differ geometrically and therefore do not have the same level of fidelity. "BLE LOD 10" and "3DM LOD 10" resemble "ABB 2", albeit partially, but topologically they lack the regularity of "ABB 2".
Differences again appear when the file sizes of the three LPMs with the same or nearly the same PC are compared. "BLE LOD 10" has a file size of 5.71 KB, "3DM LOD 10" 3.48 KB, and "MEL LOD 10" 3.49 KB. "3DM LOD 10" is smaller than "BLE LOD 10" despite having the same PC, and smaller than "MEL LOD 10" even though the latter has a higher PC. "BLE LOD 10" is larger than "MEL LOD 10" in file size, even considering the ratio between their PCs. On this basis, the 3ds Max "ProOptimizer" algorithm, and therefore the "Vertex Removal" operation, is again the most successful at reducing FS at these PC levels. Even compared with "ABB 2" saved in 3ds Max, "3DM LOD 10" has a smaller file size despite the more complex process performed on it. After "3DM LOD 10", "MEL LOD 10" ranks second in low file size at these PC levels. From this point of view, the MeshLab "Clustering Decimation" algorithm, and hence the Cell Collapse operation, is more successful in reducing file size than the Blender Decimate-Collapse algorithm, and therefore the Edge Collapse operation, at this PC level.
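The pattern above, in which two meshes with the same polygon count yield different file sizes, follows from how mesh formats store data: each vertex and face becomes a record, and extra attributes or longer coordinate strings enlarge the file independently of the polygon count. A minimal sketch, assuming a plain ASCII .obj export (the helper name and the sample mesh are hypothetical, not the models from this study):

```python
import io

def obj_size_bytes(vertices, faces):
    """Rough ASCII .obj size for a triangle mesh.
    vertices: list of (x, y, z) floats; faces: list of (i, j, k) 1-based indices."""
    buf = io.StringIO()
    for x, y, z in vertices:
        buf.write(f"v {x:.6f} {y:.6f} {z:.6f}\n")
    for a, b, c in faces:
        buf.write(f"f {a} {b} {c}\n")
    return len(buf.getvalue().encode("ascii"))

# A single triangle: three vertex records plus one face record.
tri = obj_size_bytes([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(1, 2, 3)])
```

Exporters that also write normals, UV coordinates, or smoothing groups add further records per vertex or face, which is one plausible reason models at the same PC differ in size across software.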

Discussion
In general, the field research showed in many places that all algorithms try to preserve the geometry as much as possible (see BLE LOD 7, 3DM LOD 7, MEL LOD 7). In all three algorithms, the LPMs produced by reducing the initial 1280 PC generally keep their spherical form down to approximately 360 PC. LPMs around that PC can be said to have a "Full Level of Fidelity". Geometric changes begin below that PC (see BLE LOD 8, 3DM LOD 8, MEL LOD 8). The geometric/shape changes that occur after this point move the LPM away from the spherical form and thus reduce the level of fidelity of the abstraction. These changes continue until the PC reaches 2 (the MEL CD algorithm cannot create a 3D model at some PCs). Among the LPMs in the 360-2 PC range, BLE LOD 8-9, 3DM LOD 8-9, and MEL LOD 8 retain an almost spherical form, as they do not depart from it geometrically or in shape.
For this reason, these LPMs can be categorized as "High Level of Fidelity". BLE LOD 10, 3DM LOD 10, and MEL LOD 9 depart from the spherical form geometrically or in shape and only loosely resemble it; these LPMs are at "Half Level of Fidelity". BLE LOD 11-12, 3DM LOD 11-12, and MEL LOD 10 are far removed from the spherical form but can still faintly resemble it, so they can be said to have a "Low Level of Fidelity". As a general conclusion, objects abstracted at this level have the potential to transform from the first form into a new and original form and to be used in the form-finding process of another design. This is one of the abstraction methods used in the design phase of developing a new form. From this perspective, the BLE LOD 11-12, 3DM LOD 11-12, and MEL LOD 10 LPMs provide these features, which distinguishes them from the other LPMs in the study. BLE LOD 13-15, 3DM LOD 13-15, and MEL LOD 11-12, with lower PCs, have completely moved away from the spherical form and no longer resemble it; these LPMs have lost their fidelity. Beyond a certain LOD, the PC becomes insufficient to produce a 3D model at all.
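The categorization above amounts to a simple lookup from model name to fidelity level. Restating the study's groupings as data (the function name is illustrative):

```python
# Fidelity categories assigned in this study to each algorithm's LPMs.
FIDELITY = {
    "Full": ["BLE LOD 7", "3DM LOD 7", "MEL LOD 7"],
    "High": ["BLE LOD 8", "BLE LOD 9", "3DM LOD 8", "3DM LOD 9", "MEL LOD 8"],
    "Half": ["BLE LOD 10", "3DM LOD 10", "MEL LOD 9"],
    "Low":  ["BLE LOD 11", "BLE LOD 12", "3DM LOD 11", "3DM LOD 12",
             "MEL LOD 10"],
    "None": ["BLE LOD 13", "BLE LOD 14", "BLE LOD 15",
             "3DM LOD 13", "3DM LOD 14", "3DM LOD 15",
             "MEL LOD 11", "MEL LOD 12"],
}

def fidelity_of(lpm):
    """Look up the fidelity level the study assigns to an LPM by name."""
    for level, models in FIDELITY.items():
        if lpm in models:
            return level
    return "unknown"
```

Note that the same fidelity level falls at different LOD indices per algorithm (e.g. "Half" is LOD 10 for BLE and 3DM but LOD 9 for MEL), since MEL CD skips some PCs.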
As a result of the general observations and examinations made throughout this study, several conclusions were reached. First, the hypothesis that geometric variation, level of fidelity (LOF), polygon count (PC), and file size (FS) significantly affect the performance of a polygon reduction algorithm (PRA) was confirmed. Evaluated against these criteria, the algorithms give varying results: LPMs produced by the algorithms at any given level of detail differ in geometry, polygon count, and file size. It also turns out that analog and digital abstraction methods produce similar geometries through polygonal abstraction at different levels of detail. For example, analog abstraction can express an object with minimal geometry, and a PRA can likewise minimize the PC of a model without breaking its geometry. Similarly, abstraction enables an object to be expressed rapidly in an analog environment, while a PRA shrinks the LOF and speeds up work on the model. Abstraction and polygon reduction methods can be used for four different purposes at different levels of detail and fidelity, or in cases of geometric change:
- to produce different styles and topologies of an object at a high level of detail/fidelity while conserving the geometry;
- to produce different forms and geometry from an object at half the level of detail/fidelity with a simple change in the geometry;
- to produce new forms and geometry that resemble the original object at a low level of detail/fidelity;
- to produce distinct forms and geometry that do not resemble the original object at a lower level of detail, with no fidelity and new geometry.
The performance of the three software algorithms and operations, compared according to the evaluation criteria, is shown in Table 7. The table shows which algorithm is more useful under each criterion.
The Blender Decimate-Collapse (BLE DC) algorithm simplifies details with a focus on preserving the geometry. It can also produce smooth topologies, in terms of face-edge flow, down to low levels of detail. However, it cannot reduce the file size as much as the others. In 3ds Max, the Vertex Removal (VR) operation is applied through the "ProOptimizer" tool. Since this operation never creates a vertex that did not exist before, it is computationally cheaper. For the same reason, however, some details of the mesh model remain while others disappear, and the new geometry can change beyond recognition where the essential details of the object are lost. The Cell Collapse (CC) operation simplifies so as to express small details on the model with as little geometry as possible; consequently, both small and vital details may disappear. These results support the research by Caradonna et al. [15], since the 3DM POP and MEL CD algorithms involve the EC and CC operations, and it can be concluded that these operations reduce file size more successfully while preserving the geometry. However, because LPMs at the same PC produced with 3DM POP have a smaller file size than those produced with MEL CD, this study reaches the opposite conclusion of that study when comparing the Edge Collapse and Cell Collapse operations in this respect.
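For illustration, the Cell Collapse idea can be sketched as vertex clustering: snap every vertex into a uniform grid cell, merge each cell's vertices into one representative point, and drop faces that collapse. This is a minimal sketch in the spirit of MeshLab's Clustering Decimation, not its actual implementation; the function name, cell size, and centroid choice are assumptions:

```python
from collections import defaultdict

def cluster_decimate(vertices, faces, cell=0.5):
    """Minimal vertex-clustering (Cell Collapse) sketch: snap vertices into a
    uniform grid, merge each cell to its centroid, drop degenerate faces."""
    cells = defaultdict(list)               # grid cell -> vertex indices
    for i, (x, y, z) in enumerate(vertices):
        key = (int(x // cell), int(y // cell), int(z // cell))
        cells[key].append(i)
    remap, new_vertices = {}, []
    for idxs in cells.values():
        cx = sum(vertices[i][0] for i in idxs) / len(idxs)
        cy = sum(vertices[i][1] for i in idxs) / len(idxs)
        cz = sum(vertices[i][2] for i in idxs) / len(idxs)
        for i in idxs:
            remap[i] = len(new_vertices)    # all cell members share one vertex
        new_vertices.append((cx, cy, cz))
    new_faces = []
    for a, b, c in faces:                   # 0-based triangle indices
        fa, fb, fc = remap[a], remap[b], remap[c]
        if len({fa, fb, fc}) == 3:          # skip faces collapsed to edge/point
            new_faces.append((fa, fb, fc))
    return new_vertices, new_faces
```

The sketch makes the trade-off visible: any detail smaller than the cell size, vital or not, is merged away in one step, which matches the behavior described above.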

Conclusion and Further Remarks
It can be concluded that the automatic polygon reduction (PR) operations, algorithms, and software examined in this study are generally efficient at polygon reduction, as these areas have been studied for many years. These methods can be used at many stages of the design process, especially in developing the form of a new object, by product, graphic, and architectural designers, in terms of originality, speed of work, file size, level of detail, and polygon count reduction. "3ds Max ProOptimizer" (3DM POP) produces useful LPMs with low file sizes; it is therefore suggested as the ideal algorithm for producing objects that can run on low-end computers and are efficient in terms of speed. Because BLE DC focuses on preserving the existing geometry with new points, it is recommended for creating LPMs without changing their appearance. Since the LPMs produced by MEL CD can take on a new form without losing fidelity, it is recommended for the form development phase of a design process.
Regardless of the algorithm in which it is used, the VR operation carries the risk of losing essential details while preserving unimportant ones. For a successful abstraction that keeps its fidelity, algorithms performing this operation should analyze the models correctly before applying PR. Creating a database giving access to general information about the shapes of the objects these models express would reduce this risk. Ready-made door, window, and stair models in 3ds Max show that it is possible to define some objects parametrically in software [23]. The MEL CD algorithm prioritizes the simplification of relatively small details on the model and is therefore suggested for simplifying organic HPMs. Yalcinkaya and Delikanli [24] developed algorithms, still used today, that can generate simplified voxel-based models from HPMs. The CC operation in MEL CD is recommended when calculating the ideal number and volume of voxels. In addition, MEL CD can produce LPMs that, beyond a certain level of detail, maintain fidelity in an original form.
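The link between CC and voxel counting is that both rest on the same grid-snapping step. A minimal sketch of that step (not the algorithm of [24]; the function name and voxel size are assumptions) counts how many voxels a model's points would occupy at a given resolution:

```python
def occupied_voxels(points, voxel=1.0):
    """Map points to the set of voxel indices they occupy. The set's size is a
    crude stand-in for the voxel count a voxel-based simplifier would produce
    at this resolution."""
    return {(int(x // voxel), int(y // voxel), int(z // voxel))
            for x, y, z in points}
```

Sweeping the voxel size and watching the occupied-voxel count is one simple way to search for an "ideal" resolution: too fine and the count approaches the vertex count, too coarse and distinct features merge into one voxel.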
For this reason, it is anticipated that this algorithm can be used to find new forms in the design process. It is also a preferable algorithm when the software itself is used to design a form. On this basis, it is recommended to software developers working on artificial intelligence and machine learning in the design field.
Today, there are studies on the use of "Artificial Intelligence" in "Low Poly Modeling" methods, where the system draws inferences from formal data and makes decisions through "Machine Learning". Hähnel et al. [25] developed an algorithm that produces low poly surfaces by detecting large planar surfaces while 3D modeling terrain scanned with laser scanners mounted on mobile robots. Kalogerakis [26] creates machine learning algorithms for artificial intelligence to produce geometric shapes. In the medical field, Izard et al. [27] study the use of polygon reduction algorithms in "Virtual and Augmented Reality" applications. Following this thesis study, the aim is to continue research on these concepts.