Tuesday, December 19, 2006

Flank Contouring w/ Check Surface Collision Detection

Although it's still not complete/successful, I just got a basic idea to prove my thoughts (or maybe just another rough explanation) for handling check surfaces in the standard flank contouring operation.

First I assumed that every check surface can be decomposed into, or approximated by, multiple so-called check patches, which are small "planar" rectangles lying on the check surface. (Assumption 1)

I also assumed that the tessellated sweep segment of the tool path is an approximately planar rectangle, provided the drive surface is simple enough and the tessellation step is small enough. (Assumption 2)

So intuitively, if I can figure out how to detect the collision with a specific check patch, I can apply the same approach to the entire check surface.

For any orientation (rotation + translation) of a specific check patch, the transformation can be decomposed into rotations about the X, Y, and Z axes (along with translations on the XY/YZ/XZ planes).
Note that by Assumption 2 we can define the X/Y/Z axes intuitively on each sweep segment: the XZ plane is coplanar with the sweep segment plane, with X pointing to the right, Z pointing up,
and Y = cross(X, Z).
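The local frame above can be sketched in code. This is a minimal illustration under Assumption 2; `sweep_segment_frame` and the corner ordering are hypothetical names of mine, not from any CAM library:

```python
# Minimal sketch (hypothetical): build the local frame of a tessellated
# sweep segment, assumed to be an approximately planar quad with corners
# ordered bottom-left, bottom-right, top-right, top-left.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(a):
    n = sum(x * x for x in a) ** 0.5
    return tuple(x / n for x in a)

def sweep_segment_frame(bl, br, tr, tl):
    """Local axes: X to the right, Z up, Y = cross(X, Z)."""
    x_axis = normalize(sub(br, bl))           # along the bottom edge
    up = sub(tl, bl)                          # rough "up" direction
    # remove the X component so Z is perpendicular to X in the segment plane
    d = sum(u * x for u, x in zip(up, x_axis))
    z_axis = normalize(tuple(u - d * x for u, x in zip(up, x_axis)))
    y_axis = cross(x_axis, z_axis)            # normal of the segment plane
    return x_axis, y_axis, z_axis
```

For a unit square lying in the world XZ plane, this returns X = (1, 0, 0), Z = (0, 0, 1), and Y = (0, -1, 0), matching the Y = cross(X, Z) convention above.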

Assume some patch, with only a rotation about the Y axis and a translation on the XZ plane, collides with a sweep segment of the tool path. Then the point at a distance of R along the
tool backward direction from the minimal extremum point of the intersection between the patch and the sweep segment (note: the intersection will be a curve on the sweep
segment) will lie exactly on the axis of the collided tool.
So we can conclude that
(Lemma 1)
(tool collides with a check patch oriented by a Y-axis rotation and an XZ translation) =>
(the translation of the check patch along the tool backward direction by a distance of R intersects the sweep segment, and the minimal extremum point lies on the tool axis)
(abbrev. L1 => A)
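A rough sketch of condition A, with all names hypothetical (mine, not from any real library): translate the check patch a distance R along the tool backward direction and test whether the moved patch crosses the sweep-segment plane. A full implementation would also have to extract the minimal extremum point of the intersection curve and verify it lies on the tool axis; this sketch only covers the intersection part.

```python
# Condition A sketch: does the patch, translated by R along the tool
# backward direction, intersect the sweep-segment plane?

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def condition_A(patch_corners, backward_dir, R, plane_point, plane_normal):
    # move every patch corner a distance R along the tool backward direction
    moved = [tuple(p[i] + R * backward_dir[i] for i in range(3))
             for p in patch_corners]
    # signed distances of the moved corners to the sweep-segment plane
    dists = [dot(plane_normal, tuple(m[i] - plane_point[i] for i in range(3)))
             for m in moved]
    # the translated patch intersects the plane iff the distances straddle zero
    return min(dists) <= 0.0 <= max(dists)
```

Condition B of Lemma 2 would follow the same pattern, except the patch is offset by R along the check-surface normal instead of translated along the tool backward direction.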

Again, assume some patch, with only a rotation about the Z axis and a translation on the XY plane, collides with a sweep segment of the tool path. Then the offset of the check patch (along the check surface
normal) by a distance of R must intersect the sweep segment, and the intersection, which degenerates to a point in this view direction, will lie on the tool axis.
So we can again conclude that
(Lemma 2)
(tool collides with a check patch oriented by a Z-axis rotation and an XY translation) =>
(the intersection of the check patch offset by a distance of R and the sweep segment lies on the tool axis)
(abbrev. L2 => B)

Finally, let's look at the X-axis rotation and YZ translation case. Since the YZ plane is approximately perpendicular to the sweep segment (by Assumption 2), the rotation
and translation do not affect the solution of the collided point, and the collision happens only if the check patch intersects the sweep segment.
(Lemma 3)
(tool collides with a check patch oriented by an X-axis rotation and a YZ translation) =>
(the check patch intersects the sweep segment)
(abbrev. L3 => C)
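Condition C can be sketched as a patch-vs-rectangle intersection test. All names are hypothetical: express the patch corners in the sweep segment's local frame (origin at the bottom-left corner, X/Y/Z axes as defined earlier), find where each patch edge crosses the segment plane (local y = 0), and check whether any crossing point falls inside the segment rectangle [0, w] x [0, h] in local X/Z.

```python
# Condition C sketch: does the check patch intersect the sweep segment?

def dot3(a, b):
    return sum(x * y for x, y in zip(a, b))

def to_local(p, origin, x_axis, y_axis, z_axis):
    d = tuple(p[i] - origin[i] for i in range(3))
    return (dot3(d, x_axis), dot3(d, y_axis), dot3(d, z_axis))

def condition_C(patch_corners, origin, x_axis, y_axis, z_axis, w, h):
    loc = [to_local(p, origin, x_axis, y_axis, z_axis) for p in patch_corners]
    n = len(loc)
    for i in range(n):
        a, b = loc[i], loc[(i + 1) % n]
        if a[1] * b[1] > 0.0:        # edge stays on one side of the plane
            continue
        # parameter of the crossing point along the edge
        t = a[1] / (a[1] - b[1]) if a[1] != b[1] else 0.0
        x = a[0] + t * (b[0] - a[0])
        z = a[2] + t * (b[2] - a[2])
        if 0.0 <= x <= w and 0.0 <= z <= h:
            return True
    return False
```

This only catches edges piercing the segment plane; a production test would also handle the coplanar case and patches fully containing the segment.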

Next, by Lemmas 1-3,
(tool collides with a check patch of arbitrary orientation) := L1 OR L2 OR L3 => A OR B OR C

By simple modus tollens (the contrapositive),
P => Q <==> ~Q => ~P

We have
(tool collides with a check patch of arbitrary orientation) => A OR B OR C
Taking the contrapositive,
~(A OR B OR C) := (~A AND ~B AND ~C)
implies
~(tool collides with a check patch of arbitrary orientation)
:= (tool does NOT collide with a check patch of arbitrary orientation)

So if none of A, B, or C happens, then the tool does not collide with that specific check patch. This generalizes easily to the entire check surface.
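Putting the contrapositive to work, a minimal sketch (hypothetical names): given the boolean results of the three necessary conditions A, B, C for every check patch of the surface, the tool is provably collision-free only when all three are false for every patch. When any condition holds, the test is inconclusive and the exact collision computation is still needed.

```python
def tool_clear_of_surface(patch_conditions):
    """patch_conditions: iterable of (A, B, C) boolean triples, one per patch.

    Returns True only when ~A AND ~B AND ~C holds for every patch, i.e. the
    contrapositive guarantees no collision with the whole check surface.
    """
    return all(not (a or b or c) for a, b, c in patch_conditions)
```

Note the asymmetry: a True result is a guarantee, while a False result only flags a *possible* collision, exactly as the one-directional implications above allow.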

But the above statements fail to prove that the tool axis can be exactly calculated by finding the minimal extremum point of the results from A, B, and C, which is what I used in my implementation.
Maybe this will make things more confusing?... still wondering...

Wednesday, December 13, 2006


Today I read lots of documents and browsed lots of pages and finally found the ultimate solution: ATI CTM or NVidia CUDA. In the past, we did GPGPU under standard graphics APIs such as OpenGL or DirectX, which forced us to maintain graphics state and to think and act in an improper manner. We had to describe data as textures and encapsulate our algorithms in vertex/pixel shaders to exploit the floating-point processing power of the GPU. This is somewhat cumbersome and inelegant, I think. So AMD (formerly ATI) and NVidia started to work on direct approaches, that is, directly exposing the computational power to the user instead of hiding it behind a graphics API, which also suffers from driver changes.

As far as I can tell, this could be both good and bad for programmers and researchers. With the support of scatter operations on the X1K and G80 series via CTM or CUDA, it becomes easier to port existing algorithms onto GPUs and easier to program. On the other hand, it becomes less worthwhile to reinvent algorithms on the GPU, and thus there is less academic value in this research field. So is it good news for me? I don't know yet.

But I am still very interested in research on GPUs. I will definitely purchase one or two "modern" GPUs by next week. I believe there are still lots of topics and directions that can be explored and published in journals or conferences. Just do it!

Now I am wondering which GPU I should grab... ATI X1950XTX? NVidia 7900GT? Or even the 8800GTX!?!?! The 8800GTX features the G80 core, capable of DX10 functionality such as the geometry shader, which can be very interesting for future shader research since it provides scatter capability directly in the programmable rendering pipeline. Of course the X1950 is capable of scatter operations via CTM, but CTM is not a standard in any case. I should not wait for the next generation of ATI chips shipped with DX10 cuz it's too late for me. Apparently the ATI X1950 is the most cost-effective choice so far. However, as I compare the OpenGL extension support between the 7900 and the X1950, NVidia has obviously made more effort than ATI on OpenGL to make it suitable for general-purpose computation. What is sure, however, is that the 7900 does not support scatter operations while the X1950 does. Also, the source code on web pages pushes me toward NVidia cuz they love NVidia... =.=

Now all the trade-offs are driving me crazy... OK!! Let's bet on ATI!!!!!.....?

Monday, December 11, 2006

GPU Data Compaction

Recently, I have become quite interested in the new research field of GPGPU (General-Purpose computation on Graphics Processing Units). Today I saw an impressive work by Gernot Ziegler from Germany, who made data compaction on the GPU real. The video clip on his web site almost looks like a Hollywood shot (and could be a SIGGRAPH poster as well). I was wondering when my algorithm would become real... sad...

Be Creative and Pursue Excellence!!!!