An Industrial Robot as Part of an Automatic System for Geometric Reverse Engineering

This book collects contributions from authors around the world in 28 chapters covering the many aspects and challenges of robotics research and applications, with the aim of showing recent advances as well as the problems that must still be addressed if robots are to succeed in a wider range of settings. Each chapter addresses a specific area of robot modeling, design, or application, while keeping in view the integrated qualities that make a robot a uniquely versatile modern system with many current uses and future potential applications. The main focus is on design issues, since these are seen as the key challenge in improving the capabilities of robots for both new and established applications, given today's technologies and research programs. Considerable attention is therefore devoted to control, which continues to evolve rapidly alongside improvements in robot modeling, sensors, servo-power systems, and informatics. Other aspects that pose fundamental challenges in the design and use of higher-performance robots, such as kinematic design, dynamics, and vision integration, are considered as well.


Introduction
In many areas of engineering, medical science, and art there is a strong demand to create appropriate computer representations of existing objects from large sets of measured 3D data points. This process is called Geometric Reverse Engineering (GRE). GRE has long been an important topic in computer vision, but with the advances in high-accuracy laser scanning technology it has also become an important topic in the geometric modelling community, where it is used to generate accurate and topologically consistent CAD models for manufacturing and prototyping. Commercial GRE systems are available but still maturing. Many research groups worldwide work on the GRE problem and its applications, e.g., the Geometric Modelling Laboratory at the Hungarian Academy of Sciences and the Geometric Computing & Computer Vision group at Cardiff University. The CAD technology research group at Örebro University in Sweden started its GRE research in 2001, focusing on problems related to the automatic GRE of unknown objects. A laboratory environment has been created with a laser profile scanner mounted on an industrial robot, controlled and integrated by software based on the open source CAD system Varkon. The basic principles of the system were described by (Larsson & Kjellander, 2004), the details of motion control and data capturing are given in (Larsson & Kjellander, 2006), and an algorithm for automatic path planning is presented in a later publication. Automatic GRE requires an automatic measuring system, and the CAD research group has chosen to use an industrial robot as the carrier of the laser scanner. This has many advantages, but one problem with the approach is that the accuracy of the robot is relatively low compared to the accuracy of the laser scanner. It is therefore of particular interest to investigate how the different parts of the system influence the final accuracy of GRE operations.
Since the scanner is more accurate than the robot (Rahayem et al., 2007), it is also of interest to investigate if and how 2D profile data from the scanner can be used to increase the accuracy of GRE operations. Section 3 presents the measuring system; a detailed investigation of its accuracy is given in (Rahayem et al., 2007). Section 4 introduces a typical GRE operation, segmentation. The accuracy of planar segmentation based on 3D point clouds was investigated and published in (Rahayem et al., 2008). Finally, this chapter presents the implementation of a planar segmentation algorithm supported by experimental results.

Geometric Reverse Engineering
In the development of new products, computer aided design (CAD) systems are often used to model the geometry of the objects to be manufactured. Geometric Reverse Engineering (GRE) is the reverse process: the objects already exist, and CAD models are created by interpreting geometric data measured directly from their surfaces. One application of GRE is to create a three-dimensional (3D) CAD model of some object X and use the model to manufacture new objects that are partly or completely copies of X. An often-cited introduction to GRE is the paper by (Vàrady et al., 1997), which divides the GRE process into four steps. The flow chart in Fig. 1 describes the sequence of these steps, and the forthcoming sections explain each step in more detail. See (Farin et al., 2002) for more details about GRE.

Data Capture
There are many different methods for acquiring shape data; the most popular are described in detail in (Vàrady et al., 1997). Essentially, each method uses some mechanism for interacting with the surface or the shape of the measured object. There are non-contact methods, where light, sound or magnetic fields are used, with or without a camera, and tactile methods, where the surface is touched using mechanical probes at the end of an arm. In each case an appropriate analysis must be performed to determine the positions of points on the object's surface from the physical readings obtained. In some laser range finders, for example, a camera calibration is performed to determine the distance between a measured point and the center of the sensor. From a practical point of view, tactile methods are more robust (i.e., less noisy, more accurate, more repeatable), but they are also the slowest for data acquisition. Coordinate Measuring Machines (CMM) can be programmed to follow paths along a surface and collect very accurate and almost noise-free data. Data capture (step 2.1) can be done manually or automatically. Manual measuring gives full control over the measurement process, and an experienced operator can optimize the point cloud to produce the best possible result. Automatic measuring can produce good results if optimized for a specific class of objects with similar shape. For unknown objects, automatic measuring has in practice not reached the state where it can compete with manual measuring as far as the quality of the point cloud is concerned. One reason for this is that automated processes do not adapt the density of the point cloud to the shape of the object: planar regions need fewer points than sharp corners or edges to be interpreted correctly in steps 2.2 and 2.3. For optical measurement systems, like laser scanners, it is also important that the angle between the laser source and the camera in relation to the normal of the surface is within certain limits.
If they are not, the accuracy will drop dramatically. A flexible system is described in (Chan et al., 2000), where a CMM is used in combination with a profile scanner. In (Callieri et al., 2004), a system based on a laser range camera mounted on the arm of an industrial robot in combination with a turntable is presented; the robot moves the camera to view the object from different positions. In (Seokbae et al., 2002), a motorized rotary table with two degrees of freedom and a scanner mounted on a CNC machine with four degrees of freedom are used. There are many practical problems with acquiring usable data, for instance:

• Calibration is an essential part of setting up the position and orientation of the measuring device. If not done correctly, the calibration process may introduce systematic errors. Laser scanner accuracy also depends on the resolution and the frame rate of the video system used.

• Accessibility is the issue of scanning data that is not easily acquired due to the shape or topology of the part. This usually requires multiple scans and/or a flexible device for sensor orientation, for instance a robot with many degrees of freedom. Still, it may be impossible to acquire data through holes, etc.

• Occlusion is the blocking of the scanning medium due to shadowing or obstruction. This is primarily a problem with optical scanners.

• Noise and incomplete data are difficult to eliminate. Noise can be introduced in a multitude of ways, from extraneous vibrations, specular reflections, etc. There are many different filtering approaches that can be used, and an important question is whether to eliminate the noise before, after or during the model building stages. Noise filtering can also destroy the sharpness of the data: sharp edges may disappear and be replaced by smooth blends.

• Surface finish, i.e., the smoothness and material coating, can dramatically affect the data acquisition process. Both contact and non-contact data acquisition methods will produce more noise with a rough surface than with a smooth one.

See (Vàrady et al., 1997) for more details about data acquisition and its practical problems.
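The trade-off between noise removal and edge sharpness noted above can be illustrated with a minimal sketch (not taken from the chapter): a simple moving-average filter applied to a synthetic scan profile with a sharp step edge. The noise amplitude shrinks, but the step is replaced by a smooth blend.

```python
def moving_average(profile, window=5):
    """Smooth a 1D list of depth values with a centered moving average.
    (Illustrative filter only; real GRE systems use more elaborate ones.)"""
    half = window // 2
    smoothed = []
    for i in range(len(profile)):
        lo = max(0, i - half)
        hi = min(len(profile), i + half + 1)
        smoothed.append(sum(profile[lo:hi]) / (hi - lo))
    return smoothed

# A noise-free step edge: depth jumps from 0.0 to 1.0 at index 10.
step = [0.0] * 10 + [1.0] * 10
smooth = moving_average(step, window=5)

# The edge is no longer sharp: values near index 10 become intermediate blends.
print(smooth[8:12])   # [0.2, 0.4, 0.6, 0.8]
```

The same averaging that would suppress random noise has here rounded a perfectly sharp edge, which is exactly why filtering before segmentation must be applied with care.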

Pre-processing
As indicated in section 2, the main purpose of GRE is to convert a discrete set of data into an explicit CAD model. The pre-processing step takes care of the acquired data and prepares it for segmentation and surface fitting. This may involve merging of multiple point clouds, point reduction, triangulation, mesh smoothing, noise filtering, etc. The discrete data set typically consists of (x, y, z) coordinate values of measured data points. The points may be organized in a pattern or unorganized. In the latter case, the points come from random or manual point-wise sampling; in the former case the measurements may have taken place along known scan paths, which results in a sequence of scan lines or profiles, see (Larsson & Kjellander, 2006). Another important issue is neighborhood information: a regular mesh, for example, implicitly gives connectivity except at step discontinuities. In other words, a topology is created that connects neighbouring points with each other, usually into a triangular or polygonal mesh. The measurement result can then be displayed as a shaded faceted surface, see (Roth & Wibowoo, 1997) and (Kuo & Yan, 2005).
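As a hedged illustration of how organized scan-line data can be connected into a triangular mesh (the actual Varkon MESH construction is not detailed in this chapter, so the indexing scheme below is an assumption), a row-major grid of profile points can be triangulated by splitting each grid quad into two triangles:

```python
def triangulate_profiles(n_profiles, n_points):
    """Connect an organized grid of scan points (n_profiles rows of
    n_points each) into triangles. Points are indexed row-major:
    index = row * n_points + col. Returns a list of index triples."""
    triangles = []
    for r in range(n_profiles - 1):
        for c in range(n_points - 1):
            a = r * n_points + c       # point on this profile
            b = a + 1                  # its neighbour along the profile
            c2 = a + n_points          # corresponding point on next profile
            d = c2 + 1
            # split the grid quad (a, b, d, c2) into two triangles
            triangles.append((a, b, c2))
            triangles.append((b, d, c2))
    return triangles

tris = triangulate_profiles(3, 4)
print(len(tris))   # 2 triangles per quad: 2 * (3-1) * (4-1) = 12
```

Connectivity is thus a free by-product of the known scan order, which is precisely the advantage of organized over unorganized point clouds mentioned above.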

Segmentation and surface fitting
This section briefly introduces segmentation and surface fitting; section 4 gives more details and describes different segmentation methods. In reality, the surface of an object can be divided into sub-surfaces, which meet along sharp or smooth edges. Some of them will be simple surfaces such as planes or cylinders, while others will need to be represented by more general free-form surfaces. The tasks to be solved at this stage of shape reconstruction are:

• Segmentation: each point set is divided into subsets, one for each natural surface, so that each subset contains just those points sampled from a particular natural surface.

• Classification: deciding what type of surface each subset of points belongs to (e.g., planar, cylindrical, etc.).

• Fitting: finding the surface of the given type which best fits the points in the given subset.

See (Chivate & Jablokow, 1995); (Fisher, 2004); (Benkö et al., 2004) for more information about surface fitting for GRE. Essentially, two different approaches to segmentation can be distinguished, namely edge-based and face- (or region-) based methods, see (Woo et al., 2002) and (Petitjean, 2002).
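The classification and fitting tasks can be sketched for the simplest case, a plane. The following is an illustrative sketch (not the chapter's algorithm, and the tolerance value is an assumption): a least-squares plane is fitted via a singular value decomposition of the centered points, and a subset is classified as planar if the RMS fit error is small.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit via SVD/PCA. Returns (centroid, normal, rms).
    The plane normal is the right singular vector belonging to the
    smallest singular value of the centered point matrix."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, s, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    rms = s[-1] / np.sqrt(len(pts))   # RMS distance of points to the plane
    return centroid, normal, rms

def is_planar(points, tol=0.05):
    """Classify a point subset as planar if the RMS fit error is below
    tol (here a hypothetical 0.05 mm threshold)."""
    return fit_plane(points)[2] < tol

# Points sampled exactly from the plane z = 0.1x + 0.2y
pts = [(x, y, 0.1 * x + 0.2 * y) for x in range(5) for y in range(5)]
print(is_planar(pts))   # True
```

Fitting and classification are thus intertwined: the fit residual itself is the evidence used to accept or reject the surface type, which foreshadows the chicken-and-egg relation discussed in section 4.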

CAD model creation
The previous section described various alternatives for extracting geometric information from a dense set of measured data points. Due to the diversity of surface representations and applicable algorithms, the input for model creation can arrive in various formats at different levels of geometrical and topological completeness. For this reason, it would be very difficult to present a uniform approach for the final model creation, see (Vàrady et al., 1997). (Benkö et al., 2001) describe a method to create an explicit CAD model in the form of a Boundary Representation (B-rep).

Sequential and iterative GRE
Steps 2.1 through 2.4 above are usually sequential and automatic, but they may involve different software products and threshold values and usually need to be adjusted manually to yield good results. To completely automate the entire GRE process, steps 2.1 through 2.4 need to be integrated into a single system. Such a system would need a numerically controlled mechanical device for orienting the sensor and an automatic planning algorithm to control its movements, see (Larsson & Kjellander, 2004) and (Larsson & Kjellander, 2006). An iterative approach could then be applied, where the system during step 2.1 first makes a coarse measurement of the object and then iteratively more precise measurements based on the results of previous iterations. During segmentation and fitting it would also be able to increase the quality of the result by directing the measurement system to acquire complementary data iteratively, see fig. 2. An example of an automatic GRE system is the one used for the work in this chapter. It is based on an industrial robot with a turntable, a laser profile scanner and the Varkon CAD software; the setup and installation of the system are described in (Larsson & Kjellander, 2004), and section 3 gives an introduction to the system and its specification. An industrial robot with a turntable is a very flexible device. It is fast, robust and relatively cheap, and the same is true for a laser profile scanner. The combination is thus well suited for industrial applications where automatic GRE of unknown objects is needed in real time. A future goal is to implement all four steps of the GRE process in this system. The accuracy of a CAD model created by GRE depends of course on the basic accuracy of the measurement system used, but also on how the system is used and how the result is processed. It is important to note that the basic accuracy of an industrial robot is relatively low, while the accuracy of a profile scanner is at least 10 times better, see (Rahayem et al., 2007). With a fully integrated GRE system, where the GRE software has access not only to 3D point data but also to 2D profile data and can control the orientation of the scanner relative to the measured object's surface, it is reasonable to believe that the low accuracy of the robot can be compensated for to some extent.

A system for automatic GRE
The automatic GRE system consists of a laser profile scanner mounted on an industrial robot with a turntable. The robot movement and scanning process are controlled through the GRE software, which in turn is reachable via a communication interface over TCP/IP. The robot is an ABB IRB 140 with the S4C control system, i.e., a very common industrial robot system. To increase flexibility, a turntable is included in the setup; Fig. 3 shows the system. The setup makes it possible to move the scanner along any arbitrary scan path with a suitable and continuously varying orientation of the scanner head. This ability is required to perform an optimal scan, but it also increases the complexity of scan path generation. The essential parts of the system are described in more detail below.

The robot and turntable
The robot arm is a standard ABB IRB 140 with six rotational joints, each with a resolution of 0.01º. The robot manufacturer offers an optional test protocol for an individual robot arm called Absolute Accuracy. According to the test protocol of our robot arm, it can report its current position within ± 0.4 mm everywhere within its working area under full load. The robot arm is controlled by an S4C controller which also controls the turntable. The turntable has one rotational joint; its repeating accuracy, according to the manufacturer, at equal load and a radius of 500 mm is ± 0.1 mm. This corresponds to an angle of 0.01º, which is the same accuracy as the joints of the robot arm. See (ABB user's guide) and (ABB rapid reference manual) for more details on the robot, and (Rahayem et al., 2007) for more details on the accuracy analysis. While not yet verified, the system could achieve even better accuracy for the following reasons:

• The calibration and its verification were performed at relatively high speed, while the part of GRE measuring that demands the highest accuracy will be performed at low speed.

• The weight of the scanner head is relatively low, which together with the low scanning speed implies a limited influence of errors introduced by the impact of dynamic forces on the mechanical structure.

• A consequence of using a turntable in combination with the robot is that only a limited part of the robot's working range is used. The error introduced by the robot's first axis will therefore be less than what was registered in the verification of the calibration.

Another possibility would have been to realize the system with a CMM in combination with laser profile scanners with interfaces suited for that purpose. This would give higher accuracy, but the use of a robot offers some other interesting properties:

1. The robot used as a translation device in a measurement system is relatively cheap compared to other solutions that give the same flexibility.

2. The robot is robust and well suited for an industrial environment, which makes this solution interesting for tasks where inspection at the site of production is desirable.

3. The system has the potential to be used in combination with tasks that are already robotized.

The laser profile scanner
The laser profile scanner consists of a line laser and a Sony XC-ST30CE CCD camera mounted in a scanner head manufactured in the Mechanical Engineering laboratory at Örebro University. The camera is connected to a frame grabber in a PC that performs the image analysis with software developed by the company Namatis AB in Karlskoga, Sweden. An analysis of the scanner head (camera and laser source), its sources of errors and its accuracy has been carried out through a series of experiments and shows that the accuracy of the scanner head is at least 10 times better than that of the robot, see (Rahayem et al., 2007) and (Rahayem et al., 2008). Fig. 4 shows the scanner head.

Accuracy of the scanner head
In (Rahayem et al., 2007), the authors showed that an accuracy of 0.05 mm or better is possible when fitting lines to laser profiles. They also showed how intersecting lines from the same camera picture can be used to measure distances with high accuracy. In a new series of experiments, (Rahayem et al., 2008) investigated the accuracy in measuring the radius of a circle. An object with cylindrical shape was measured with the scanner head orthogonal to the cylinder axis, so that the cylinder appears as a circular arc in the scan window. The authors used a steel cylinder with R = 10.055 mm, measured with a Mitutoyo micrometer (0-25 mm / 0.001 mm), and the experiment was repeated 100 times with the scanner-to-object distance D increasing in steps of 1 mm, thus covering the scan window. To make it possible to distinguish between systematic and random errors, each of the 100 steps was repeated N = 10 times, and between these repetitions the scanner head was moved 0.05 mm in a direction collinear with the cylinder axis to filter out the effect of dust, varying paint thickness or similar effects. The total number of pictures analyzed is thus 1000. For each distance D, a least squares circle fit was made to each of the N pictures, and the systematic and random errors were calculated using Eqs. (1) and (2). The results are plotted in figs. 5 and 6.
E_s and E_r are the systematic and random radius errors, R and R_i are the true and measured radii, and N is the number of profiles for each D. The maximum size of the random error is less than 0.02 mm for reasonable values of D. For more details about the accuracy analysis see (Rahayem et al., 2007) and (Rahayem et al., 2008).
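Since Eqs. (1) and (2) are not reproduced here, the following sketch uses one common pair of definitions (an assumption, not necessarily the chapter's exact formulas): the systematic error as the deviation of the mean fitted radius from the true radius, and the random error as the maximum deviation of individual fits from that mean. The circle fit is the standard algebraic (Kasa) least-squares fit.

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic least-squares circle fit (Kasa method).
    Solves x^2 + y^2 + a*x + b*y + c = 0 for a, b, c, then recovers
    center (-a/2, -b/2) and radius sqrt(cx^2 + cy^2 - c)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    rhs = -(xs**2 + ys**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    return cx, cy, np.sqrt(cx**2 + cy**2 - c)

def radius_errors(true_r, fitted_radii):
    """Systematic error: mean fitted radius minus the true radius.
    Random error: maximum deviation of individual fits from the mean.
    (Assumed definitions, stated in the lead-in above.)"""
    radii = np.asarray(fitted_radii, float)
    mean_r = radii.mean()
    return mean_r - true_r, np.max(np.abs(radii - mean_r))

# Synthetic arc sampled from a circle with R = 10.055 mm centered at (3, 4)
t = np.linspace(0.2, 1.2, 50)
cx, cy, r = fit_circle(3 + 10.055 * np.cos(t), 4 + 10.055 * np.sin(t))
print(round(r, 3))   # 10.055
```

On noise-free synthetic data the fit recovers the radius exactly; applied to the N = 10 repeated pictures per distance D, `radius_errors` would separate the repeatable (systematic) part of the radius error from the scatter (random) part.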

The CAD system
A central part of our experimental setup is the Varkon CAD system. It is used for the main procedure handling, data representation, control of the hardware, decision making, simulation, verification of planned robot movements and the GRE process itself. The robot controller and the scanner PC are connected through TCP/IP with the GRE computer, where the Varkon CAD system is responsible for their integration, see fig. 7. Varkon started as a commercial product more than 20 years ago but is now developed by the research group at Örebro University as an open source project on SourceForge, see (Varkon). Having access to the C sources of a 3D CAD system with geometry, visualization, user interface etc. is a great advantage in the development of an automatic GRE process, where data capturing, pre-processing, segmentation and surface fitting need to be integrated. In addition, it makes it possible to add new functions and procedures. Varkon includes a high-level, geometrically oriented modeling language, MBS, which is used for the parts of the GRE system that are not time critical, but also to develop prototypes for testing before final implementation in the C sources. The GRE process as described in section 2 above is purely sequential. A person operating a manual system may, however, decide to stop the process in step 2.2 or 2.3 and go back to step 2.1 in order to improve the point cloud. A fully automatic GRE system should behave like a human operator in this respect. This means that the software used in steps 2.2 and 2.3 must be able to control the measurement process in step 2.1. In a fully automatic GRE system, the goal of the first iteration may only be to establish the overall size of the object, i.e., its bounding box. The next iteration would narrow in and investigate the object in more detail. The result of each iteration can be used to plan the next, which will produce a better result.
This idea leads to dividing the automatic GRE procedure into three different modules or steps, performed in the following order:

• Size scan - to retrieve the object's bounding box.

• Shape scan - to retrieve the approximate shape of the object.

• GRE scan - to retrieve the final result by means of integration with the GRE process.

Before describing these steps in more detail, the author will give a short introduction to how the path planning, motion control and data capturing procedures are implemented in the system.

Path planning
One of the key issues of an autonomous measuring system is path planning. The path planning process has several goals:

• Avoid collision.

• Optimize scanning direction and orientation.

• Deal with surface occlusion.

The process must also include a reliable self-terminating condition, which allows the process to stop when perceptible improvement of the CAD model is no longer possible. (Pito & Bajcsy, 1995) describe a system with a simple planning capability that combines a fixed scanner with a turntable. Planning the scan process in such a system is a question of determining the Next Best View (NBV) in terms of turntable angles. A more flexible system is achieved by combining a laser scanner with a CMM, see (Chan et al., 2000) and (Milroy et al., 1996). In automated path planning it is advantageous to distinguish between objects of known shape and objects of unknown shape, i.e., where no CAD model exists beforehand. Several methods for automated planning of laser scanning by means of an existing CAD model are described in the literature, see for example (Xi & Shu, 1999); (Lee & Park, 2001); (Seokbae et al., 2002). These methods are not directly applicable here, since our system deals with unknown objects. For further reading on the topic of view planning see (Scott et al., 2003), a comprehensive overview of view planning for automated three-dimensional object reconstruction. In this chapter, the author uses manual path planning in order to develop automatic segmentation algorithms; in future work, the segmentation algorithms will be merged with the automatic planning. In manual mode the user manually defines the geometrical data needed to define each scan path, including:

• Window turning curve (optional).

• Tool center point z-offset (TCP z-offset).

Fig. 8 shows a curved scan path modeled as a set of curves. The system automatically processes them, and the result after the robot has finished moving is a Varkon MESH geometric entity for each scan path. In automatic mode, the system creates the curves needed for each scan path itself. This is done using a process where the system first scans the object to establish its bounding box and then switches to an algorithm that creates a MESH representation suitable for segmentation and fitting. This algorithm is published in (Larsson & Kjellander, 2006).

Motion control
To control the robot, the concept of a scan path was developed, defined by the geometrical data mentioned in the previous section, see fig. 8. This makes it possible to translate the scanner along a space curve while it simultaneously rotates. It is therefore possible to orient the scanner so that the distance and angle relative to the object are optimal with respect to accuracy, see (Rahayem et al., 2007). Full 3D orientation can also avoid occlusion and minimize the number of re-orientations needed to scan an object of complex shape. A full description of the motion control is given in (Larsson & Kjellander, 2006). After the user has defined the geometrical data that defines a scan path, the Varkon CAD system calculates a series of robot poses and turntable angles and sends them to the robot. While the robot is moving, it collects actual robot poses at regular intervals together with a time stamp for each actual pose. Similarly, the scanner software collects scan profiles at regular intervals, also with time stamps, see fig. 9. When the robot reaches the end of the scan path, all data are transferred to the Varkon CAD system, where an actual robot pose is calculated for each scan profile by interpolation based on the time stamps. For each pixel in a profile its corresponding 3D coordinates can then be computed, and all points are connected into a triangulated mesh and stored as a Varkon MESH entity. Additional information like camera and laser source centers and TCP positions and orientations is stored in the mesh data structure to be used later in the 2D pre-processing and segmentation. The details of motion control and data capturing were published in (Larsson & Kjellander, 2006). Fig. 7 shows how the different parts of the system fit together and how they communicate.
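The time-stamp interpolation described above can be sketched as follows. This is an illustrative sketch, not the Varkon implementation: it interpolates only the position component of a pose, and linear interpolation between the bracketing logged poses is an assumption (orientation would additionally require, e.g., quaternion slerp).

```python
def interpolate_pose(pose_times, poses, t):
    """Linearly interpolate a robot position for a profile time stamp t.
    pose_times: increasing time stamps of logged poses.
    poses: matching (x, y, z) positions.
    Orientation interpolation is deliberately omitted in this sketch."""
    for i in range(len(pose_times) - 1):
        t0, t1 = pose_times[i], pose_times[i + 1]
        if t0 <= t <= t1:                  # found the bracketing pose pair
            w = (t - t0) / (t1 - t0)
            return tuple(p0 + w * (p1 - p0)
                         for p0, p1 in zip(poses[i], poses[i + 1]))
    raise ValueError("time stamp outside recorded pose interval")

# Robot poses logged at t = 0.0 and t = 1.0; a profile captured at t = 0.25
p = interpolate_pose([0.0, 1.0],
                     [(0.0, 0.0, 100.0), (40.0, 0.0, 100.0)], 0.25)
print(p)   # (10.0, 0.0, 100.0)
```

Applying this per scan profile yields the pose needed to transform each 2D profile pixel into 3D coordinates before the points are joined into the mesh.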

The automatic GRE procedure
As mentioned above, our automatic GRE process is divided into three modules: the size, shape and GRE scan modules. These modules use all the techniques described in 3.4, 3.5 and 3.6, as well as common supporting modules such as the robot simulation, which is used to verify the planned scanning paths. Each of the three modules performs the same principal internal iteration:

• Plan the next scanning path from previously collected information (the three modules use different planning methods).

• Verify the robot movement with respect to the robot's working range, collision etc.

• Send the desired robot movements to the robot.

• Retrieve scanner profile data and corresponding robot poses.

• Register the collected data in an intermediate model (differs between the modules).

• Determine if the self-terminating condition is reached. If so, the iteration stops and the process continues in the next module until the final result is achieved.

The current state of the system is that the size scan and the shape scan are implemented. The principles of the three modules are:

• Size scan module. The aim of this module is to determine the object's extents, i.e., its bounding box. It starts with the assumption that the size of the object is equal to the working range of the robot. It then narrows in on the object in a series of predefined scans until it finds the surface of the object and thus its bounding box. To save time, the user can manually enter an approximate bounding box as a start value.

• Shape scan module. The implementation of this step is described in detail in a separate publication. It is influenced by a planning method based on an Orthogonal Cross Section (OCS) network published by (Milroy et al., 1996).

• GRE scan module. This module is under implementation. The segmentation algorithms presented in this chapter will be used in that work. The final goal is to automatically segment all data and create an explicit CAD model.
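The end product of the size scan is an axis-aligned bounding box. As a minimal sketch (not the system's actual implementation), the box the module converges to is simply the extent of the surface points found while narrowing in from the robot's working range:

```python
def bounding_box(points):
    """Axis-aligned bounding box of the surface points detected so far,
    returned as (min corner, max corner)."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# Hypothetical surface hits found during a series of predefined scans:
hits = [(12.0, 3.0, 0.0), (15.5, 8.0, 2.0), (10.0, 5.0, 7.5)]
print(bounding_box(hits))   # ((10.0, 3.0, 0.0), (15.5, 8.0, 7.5))
```

Each size-scan iteration adds hits and tightens this box, which then seeds the shape scan with a much smaller region to plan within.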

Segmentation
Segmentation is a wide and complex domain, both in terms of problem formulation and solution techniques. For a human operator it is fairly easy to identify regions of a surface that are simple surfaces like planes, spheres, cylinders or cones, while it is much more difficult for a computer. As mentioned in section 2.3, the segmentation task breaks the dense measured point set into subsets, each one containing just those points sampled from a particular simple surface. During the segmentation process two further tasks must be performed in order to arrive at the final segmented data: classification and fitting. It should be clearly noted that in practice these tasks cannot be carried out in the strict sequential order given above, see (Vàrady et al., 1997).

Segmentation background
Dividing a range image or a triangular mesh into regions according to shape change detection has been a long-standing research problem. The majority of point data segmentation approaches can be classified into three categories, which (Woo et al., 2002) define as follows:

Edge-based approaches
Edge-based methods attempt to detect discontinuities in the surfaces that form the closed boundaries of components in the point data. (Fan et al., 1987) used local surface curvature properties to identify significant boundaries in the range data. (Chen & Liu, 1997) segmented CMM data by slicing and fitting them with two-dimensional NURBS curves; the boundary points were detected by calculating the maximum curvature of the NURBS curve. (Milroy et al., 1997) used a semi-automatic edge-based approach for orthogonal cross-section (OCS) models. (Yang & Lee, 1999) identified edge points as curvature extremes by estimating the surface curvature. (Demarsin et al., 2007) presented an algorithm to extract the closed sharp feature lines necessary to create a closed curve network.

Region-based approaches
An alternative to edge-based segmentation is to detect continuous surfaces that have homogeneous or similar geometrical properties. (Hoffman & Jain, 1987) segmented range images into many surface patches and classified these patches as planar, convex or concave shapes based on a non-parametric statistical test. (Besl & Jain, 1988) developed a segmentation method based on variable-order surface fitting. A robust region growing algorithm and its improvement were published in (Sacchi et al., 1999); (Sacchi et al., 2000).

Hybrid approaches
Hybrid segmentation approaches have been developed in which the edge-based and region-based approaches are combined. The method proposed by (Yokoya et al., 1997) divided a three-dimensional measurement data set into surface primitives using bi-quadratic surface fitting. The segmented data were homogeneous in their differential geometric properties and did not contain discontinuities. The Gaussian and mean curvatures were computed and used to perform the initial region-based segmentation. Then, after employing two additional edge-based segmentations from the partial derivatives and depth values, the final segmentation result was applied to the initial segmented data. (Checchin et al., 2007) used a hybrid approach that combined edge detection based on the surface normal with region growing to merge over-segmented regions. (Zhao & Zhang, 1997) employed a hybrid method based on triangulation and region grouping that uses edges, critical points and surface normals. Most researchers have tried to develop segmentation methods that exactly fit curves or surfaces to find edge points or curves. These surface or curve fitting tasks take a long time and, furthermore, it is difficult to extract the exact edge points because the scan data consist of discrete points and edge points are not always included in the scan data. Good general overviews and surveys of segmentation are provided by (Besl & Jain, 1988); (Petitjean, 2002); (Woo et al., 2002); (Shamir, 2007). Comparing the edge-based and region-based approaches leads to the following observations:

• Edge-based approaches suffer from the following problems. Sensor data, particularly from laser scanners, are often unreliable near sharp edges because of specular reflections there. The number of points used to segment the data is small, i.e., only points in the vicinity of the edges are used, which means that information from most of the data is not used to assist in reliable segmentation. In turn, this means a relatively high sensitivity to occasional spurious data points. Finding smooth edges with tangent continuity or even higher continuity is very unreliable, as the computation of derivatives from noisy point data is error-prone. On the other hand, if smoothing is applied to the data first to reduce the errors, this distorts the estimates of the required derivatives. Thus sharp edges are replaced by blends of small radius, which may complicate the edge-finding process, and the positions of features may be moved by noise filtering.

• Region-based approaches have the following advantages. They work on a large number of points, in principle using all available data. Deciding which points belong to which surface is a natural by-product of such approaches, whereas with edge-based approaches it may not be entirely clear to which surface a given point belongs even after a set of edges has been found. Typically, region-based approaches also provide the best-fit surface to the points as a final result.

Overall, the authors of (Vàrady et al., 1997); (Fisher et al., 1997); (Robertson et al., 1999); (Sacchi et al., 1999); (Rahayem et al., 2008) believe that region-based approaches are preferable to edge-based approaches. In fact, segmentation and surface fitting resemble a chicken-and-egg problem: if the surface to be fitted were known, it could immediately be determined which sample points belong to it. It is worth mentioning that it is possible to distinguish between bottom-up and top-down segmentation methods. Assume that a region-based approach is adopted to segment the data points. The class of bottom-up methods starts from seed points. Small initial neighbourhoods of points around them, which are deemed to consistently belong to a single surface, are constructed. Local differential geometric or other techniques are then used to add further points which are classified as belonging to the same surface. The growing stops when there are no more consistent points in the vicinity of the current regions. The top-down methods, on the other hand, start with the premise that all the points belong to a single surface and then test this hypothesis for validity. If the points are in agreement, the method is done; otherwise the points are subdivided into two (or more) new sets, and the single-surface hypothesis is applied recursively to these subsets until it is satisfied. Most segmentation approaches seem to have taken the bottom-up approach (Sapidis & Besl, 1995); while the top-down approach has been used successfully for image segmentation, its use for surface segmentation is less common. A problem with the bottom-up approaches is choosing good seed points from which to start growing the nominated surface, which can be difficult and time consuming. A problem with the top-down approaches is choosing where and how to subdivide the selected surface.
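The bottom-up scheme described above can be viewed as a breadth-first traversal of a neighbourhood graph, starting at a seed and adding neighbours that pass a consistency test. The following sketch is only an illustration of that idea: the adjacency-map data layout, the function names and the consistency predicate are hypothetical, not taken from any implementation discussed in this chapter.

```python
# Minimal sketch of bottom-up region growing over a neighbourhood graph.
# The concrete mesh/point data structures are assumptions for illustration.
from collections import deque

def grow_region(seed, adjacency, is_consistent, unassigned):
    """Grow one region from a seed element (e.g. a triangle or point).

    adjacency     : dict mapping an element id to its neighbours
    is_consistent : predicate deciding whether an element still belongs
                    to the surface being grown
    unassigned    : set of elements not yet claimed by any region
                    (modified in place as elements are claimed)
    """
    region = {seed}
    unassigned.discard(seed)
    frontier = deque([seed])
    while frontier:
        current = frontier.popleft()
        for neighbour in adjacency.get(current, ()):
            if neighbour in unassigned and is_consistent(neighbour):
                unassigned.discard(neighbour)
                region.add(neighbour)
                frontier.append(neighbour)
    return region
```

Growing stops exactly as described in the text: when no unclaimed, consistent neighbour remains next to the current region. Repeating the call with a new seed drawn from the remaining unassigned elements yields the full segmentation.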

Planar segmentation based on 3D point clouds
Based on the segmentation approaches described in sections 2.3 and 4.1, the author has implemented a bottom-up, region-based planar segmentation approach in the Varkon CAD system, using the algorithm described in (Sacchi et al., 1999) with an improved region growing criterion. The segmentation algorithm includes the following steps:

1. Triangulation. Points in neighbouring laser profiles (laser strips) are joined into a triangular mesh. This is relatively easy since the points from the profile scanner are ordered sequentially within each profile and the profiles are ordered sequentially in the direction the robot is moving. The triangulation algorithm is described in (Larsson & Kjellander, 2006).

2. Curvature estimation. The curvature of a surface can be calculated by analytic methods which use derivatives, but these cannot be applied directly to digitized (discrete) data and require the fitting of a smooth surface to some of the data points. (Flynn & Jain, 1989) proposed an algorithm for estimating the curvature between two points on a surface which uses the change of the surface normal between the points. For more details about estimating the curvature of surfaces represented by triangular meshes, see (Gatzke, 2006). To estimate the curvature for every triangle in the mesh, for any pair of triangles sharing an edge one can find the curvature of the sphere passing through the four vertices involved; if the vertices are coplanar the curvature is zero. In order to compensate for the effect of varying triangle size, compensated triangle normals are used as follows:

• Calculate the normal for each vertex, called the interpolated normal, as the weighted average of the normals of all triangles meeting at this vertex. The weighting factor used for each normal is the area of its triangle.

• Calculate the compensated normal for a triangle as the weighted average of the three interpolated normals at the vertices of the triangle, using as weighting factor for each vertex the sum of the areas of the triangles meeting at that vertex.

• Calculate in a similar way the compensated centre of each triangle as the weighted average of its vertices, using the same weighting factors as in the previous step.

• For a pair of triangles with compensated centres C1 and C2 and compensated normals N1 and N2, the estimated curvature is k = |N1 ⊗ N2| / |C1 − C2|, where N1 ⊗ N2 is the cross product of the compensated normals. For a given triangle surrounded by three other triangles, three curvature values are estimated in this way. In a similar way, another three curvature values are estimated by pairing the compensated normal with the interpolated normals at each of the three vertices in turn.

• The triangle curvature is the mean of the maximum and minimum of the six curvature estimates obtained for that triangle.

3. Seed selection. Search the triangular mesh to find the triangle with the lowest curvature. This triangle is used as the seed.

4. Region growing. Connected triangles are added to the region as long as their normal vectors are reasonably parallel to the normal vector of the seed triangle. This is done by calculating the cone angle between the triangle normals as α = arccos(N1 • N2), where N1 • N2 is the dot product of the compensated (unit) normals of the two neighbouring triangles.

5. Plane fitting. For each segmented region, fit a plane using Principal Component Analysis, see (Lengyel, 2002). Steps 3 and 4 are repeated until all triangles in the mesh have been processed.

The difference between the algorithm described in this section and the Sacchi algorithm described in (Sacchi et al., 1999) is that Sacchi allowed a triangle to be added if its vertices lie within a given tolerance of the plane associated with the region, while the algorithm described here allows a triangle to be added if the cone angle between its compensated normal and the seed's normal lies within a given tolerance. This makes the algorithm faster than the Sacchi algorithm, since it uses already calculated data for the growing process instead of calculating new data. For more details about this algorithm refer to (Rahayem et al., 2008).
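The curvature estimate and the cone-angle growing criterion described above can be sketched in a few lines. The curvature of a triangle pair is the magnitude of the cross product of the compensated normals divided by the distance between the compensated centres, and the cone angle is the arccosine of the dot product of the normals. The function names below are illustrative only, and unit-length normals are assumed; this is a sketch of the stated formulas, not the chapter's actual implementation.

```python
# Sketch of the curvature estimate for a pair of adjacent triangles and
# the cone-angle criterion used for region growing. Assumes unit normals.
import math

def cross(a, b):
    """Cross product of two 3D vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def norm(v):
    """Euclidean length of a 3D vector."""
    return math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)

def estimated_curvature(n1, n2, c1, c2):
    """k = |N1 x N2| / |C1 - C2|; zero when the triangles are coplanar."""
    d = norm((c1[0] - c2[0], c1[1] - c2[1], c1[2] - c2[2]))
    return norm(cross(n1, n2)) / d

def cone_angle(n1, n2):
    """Angle between two unit normals: alpha = arccos(N1 . N2)."""
    dot = n1[0] * n2[0] + n1[1] * n2[1] + n1[2] * n2[2]
    return math.acos(max(-1.0, min(1.0, dot)))  # clamp for rounding errors
```

During region growing, a candidate triangle would be accepted when `cone_angle(candidate_normal, seed_normal)` stays below the chosen tolerance, so no new geometric quantities need to be computed per candidate.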
Figure 10. Planar segmentation based on a point cloud: the test object, the mesh before segmentation, and the mesh after segmentation

Conclusion and future work
The industrial robot equipped with a laser profile scanner is a desirable alternative in applications where high speed, robustness and flexibility combined with low cost are important. The accuracy of the industrial robot is relatively low, but if the GRE system has access to camera data or profiles, basic GRE operations like the fitting of lines can be performed with relatively high accuracy. This can be used to measure, for example, a distance or radius within a single camera picture. Experiments that show this are published in (Rahayem et al., 2007); (Rahayem et al., 2008). The author also investigated the problem of planar segmentation and implemented a traditional segmentation algorithm (section 4.2) based on 3D point clouds. From the investigations described above it is possible to conclude that the relatively low accuracy of an industrial robot can to some extent be compensated if the GRE software has access to data directly from the scanner. This is normally not the situation for current commercial solutions, but it is easy to realize if the GRE software is integrated with the measuring hardware, as in our laboratory system. It is natural to continue the work with segmentation of conic surfaces. Cones, cylinders, and spheres are common shapes in manufacturing. It is therefore interesting to investigate whether 2D profile data can be used in the GRE process also for these shapes. The theory of projective geometry can be used to establish the shape of a conic curve projected on a plane. A straight line projected on a conic surface is the inverse problem, and it would be interesting to investigate if this property could be used for segmentation of conic surfaces. 2D conic segmentation and fitting is a well-known problem, but the author has not yet seen combined methods that use 2D and 3D data to segment and fit conic surfaces.
Another area of interest is to investigate to what extent the accuracy of the system could be improved by adding a second measurement phase (iteration) based on the result of the first segmentation. This is straightforward with the described system, as the GRE software is integrated with the measurement hardware and can control the measurement process. The system would then use the result of the first segmentation to plan new scan paths in which the distance and orientation of the scanner head are optimized for each segmented region. The author has not seen any published work that describes a system with this capability.