Here we begin an exercise in processing a quarter-quad of real data. The process uses some unix commands (get yourself an emulator if you don't have the real thing) and ARC/INFO. We start with the file 122d5309.zip, which unzips to 122d5309.txt. The head of the file looks like this (with my headers):
stateplane N NAD83 orthometric geoid      GPS   return scan off-nadir GPS
 easting   northing   height  height    second  number angle angle   week
1209310.28 159905.38  269.36  195.74 157173.881390 1  -9.03   8.53 1092
1209309.17 159899.93  269.65  196.04 157173.901022 1  -9.08   8.57 1092
1209305.27 159902.94  269.95  196.33 157173.901070 1  -9.00   8.49 1092
1209301.14 159906.31  270.41  196.79 157173.901118 1  -8.91   8.41 1092
1209311.71 159891.84  269.32  195.71 157173.920607 1  -9.21   8.69 1092
1209307.81 159894.85  269.75  196.14 157173.920655 1  -9.13   8.61 1092
1209303.68 159898.23  270.18  196.56 157173.920703 1  -9.04   8.52 1092
1209299.79 159901.60  270.70  197.09 157173.920751 1  -8.95   8.44 1092
1209295.90 159904.61  270.96  197.35 157173.920799 1  -8.87   8.36 1092
1209311.09 159886.39  269.46  195.84 157173.940239 1  -9.27   8.74 1092
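Field 5 is the GPS second, which identifies the laser pulse; consecutive lines sharing a GPS second are multiple returns from one pulse. A quick nawk one-liner (illustrative, not part of the processing) counts how many pulses produced more than one return:
nawk '{n[$5]++} END {for (t in n) if (n[t] > 1) m++; print m " multi-return pulses"}' 122d5309.txt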
To select last returns:
nawk -f last.nawk 122d5309.txt >xyz_sp 
where last.nawk is:
#last.nawk   version 1
# read a lidar file and keep the last return
# converts z feet to meters
BEGIN {oldtime = 0}
/./{
   # $5 is the GPS second that identifies the pulse; when it changes,
   # the point saved from the previous line was that pulse's last return
   if (($5 != oldtime) && oldtime != 0){
     printf("%10.2f %10.2f %8.2f\n",x,y,z*0.3048)
   }
   x = $1
   y = $2
   z = $4
   oldtime = $5
}
END {printf("%10.2f %10.2f %8.2f\n",x,y,z*0.3048)}
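As a sanity check, xyz_sp should have one line per distinct GPS time; since the file is ordered by time, counting changes in field 5 gives the expected line count:
nawk '$5 != p {n++; p = $5} END {print n}' 122d5309.txt
wc -l xyz_sp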
Now let's sort the file and select the lowest Z for duplicate and near-duplicate XY coordinates (the fixed-width printf output means a plain lexical sort orders these positive coordinates numerically):
sort xyz_sp > xyz_sp_sort
nawk -f doubles.nawk xyz_sp_sort > goodsp
where doubles.nawk is:
# /jo9/topog/lidartins/doubles.nawk   version 1
# read a sorted xyz lidar file and keep the lowest of duplicate returns
# Harvey really needs only elimination of exact duplicates, but is
# using a tolerance of .02
# Because x is the primary key, the tolerance is not very effective
BEGIN {oldx = 0; oldy = 0; oldz = 0; tol = .021}
/./{
#  if (($1 == oldx) && $2 == oldy){
   dx = $1 - oldx
   absdx = (dx>0)?dx:0-dx
   dy = $2 - oldy
   absdy = (dy>0)?dy:0-dy
   if ((absdx+absdy) < tol){
     if ($3 < oldz) oldz = $3   # near-duplicate: keep the lower z
   } else {
     # (the listing was cut off here; this ending is a reconstruction)
     if (oldx != 0) printf("%10.2f %10.2f %8.2f\n", oldx, oldy, oldz)
     oldx = $1; oldy = $2; oldz = $3
   }
}
END {printf("%10.2f %10.2f %8.2f\n", oldx, oldy, oldz)}
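To see why the tolerance is not very effective, consider three consecutive lines of the sorted file (illustrative values):
1209300.01  159900.00    60.12
1209300.01  160450.77    58.31
1209300.02  159900.00    60.07
The first and third points are within the tolerance of each other, but the second sorts between them, and the script only compares consecutive lines, so the pair is missed.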
The line counts of the files are:
 1712104 122d5309.txt  TerraPoint file
 1519673 xyz_sp        last returns only
 1513707 goodsp        min z for near-duplicate points
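The two nawks and the sort could have been combined into one pipe:
nawk -f last.nawk 122d5309.txt | sort | nawk -f doubles.nawk > goodsp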
Since the nawk step already converted Z to meters (PROJECT converts only the x,y units), we can use ARC/INFO to convert the coordinates to UTM:
arc "project file xyz_sp xyz_utm27 /puppet/esri/prj/spn83toutm27"
Number the points and split off the coordinates and the heights (GENERATE wants id,x,y lines ending with END; the id,z pairs feed the INFO table below):
cat -n xyz_utm27 | tr -s '\011 ' '  ' > idxyz
nawk '{print $1, $2, $3}' idxyz > idxy
nawk '{print $1, $4}' idxyz > idz
echo END >> idxy
Now we can build a point coverage. The advantage of building the point coverage before constructing the TIN is that, provided no points are discarded in creating the TIN, the point IDs will be retained in the ARC/INFO coverages created from the TIN.
generate utmpts; input idxy;points;q
build utmpts point /* alter width
tables;define idz
utmpts-id 4 7 b
z 4 7 f 2
~
add from idz
sel utmpts.pat
 alter utmpts#;;9;;;
 alter utmpts-id;;9;;;
quit
joinitem utmpts.pat idz utmpts.pat utmpts-id
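Each line of the ascii file idz read by add from holds one id and one z, matching the two items defined above, e.g. (illustrative values):
1 59.66
2 59.75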
createtin tinny .000001 .000001
 cover utmpts POINT z
end
tinarc tinny t122d5309arc line PERCENT
tinarc tinny t122d5309pol poly PERCENT
tinarc tinny t122d5309pnt point
tables
 sel t122d5309arc.aat
  alter FNODE#;;9;;;
  alter TNODE#;;9;;;
  alter LPOLY#;;9;;;
  alter RPOLY#;;9;;;
  alter t122d5309arc#;;9;;;
  alter t122d5309arc-ID;;9;;;
 sel t122d5309pol.pat
  alter ARC#;;9;;;
  alter t122d5309pol#;;9;;;
  alter t122d5309pol-ID;;9;;;
 sel t122d5309pnt.pat
  alter t122d5309pnt#;;9;;;
  alter t122d5309pnt-ID;;9;;;
QUIT

additem t122d5309pol.pat t122d5309pol.pat t1 4 7 b
additem t122d5309pol.pat t122d5309pol.pat t2 4 7 b
additem t122d5309pnt.pat t122d5309pnt.pat slope 4 7 f 3
additem t122d5309pnt.pat t122d5309pnt.pat downarc 4 7 b
This gives us ARC/INFO coverages for the points, edges, and polygons
of the TIN.  Much useful information has been generated automatically
in these coverages, and many operations can be done quickly through
ARC/INFO commands.  However, programs that step through these files are
difficult to write and slow to run.  Instead, we use Java classes that
match the arc attribute table against the point attribute tables.  These
classes can be used in a program, Tintopo.java (under construction), that
sums downstream flows, finds sinks, and makes new connections.
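As a sketch of the kind of matching involved (the file names and layouts here are hypothetical; assume the AAT and the point PAT have been dumped to whitespace-delimited text and that node numbers have been matched to point IDs), a minimal class that finds each point's steepest downhill edge and flags the sinks:

import java.io.*;
import java.util.*;

// Hypothetical input dumps:
//   points.txt:  id z
//   arcs.txt:    id fnode tnode
// For each point, remember the edge with the steepest drop (the "downarc");
// points with no downhill edge are sinks.
public class TinTopoSketch {
    public static void main(String[] args) throws IOException {
        Map<Integer, Double> z = new HashMap<Integer, Double>();
        BufferedReader r = new BufferedReader(new FileReader("points.txt"));
        for (String line; (line = r.readLine()) != null; ) {
            StringTokenizer t = new StringTokenizer(line);
            z.put(Integer.valueOf(t.nextToken()),
                  Double.valueOf(t.nextToken()));
        }
        r.close();

        Map<Integer, Integer> downarc = new HashMap<Integer, Integer>();
        Map<Integer, Double> bestdrop = new HashMap<Integer, Double>();
        r = new BufferedReader(new FileReader("arcs.txt"));
        for (String line; (line = r.readLine()) != null; ) {
            StringTokenizer t = new StringTokenizer(line);
            int id    = Integer.parseInt(t.nextToken());
            int fnode = Integer.parseInt(t.nextToken());
            int tnode = Integer.parseInt(t.nextToken());
            // assumes every node appears in points.txt
            double drop = z.get(fnode) - z.get(tnode); // fall from fnode to tnode
            // the edge is downhill from one end and uphill from the other
            consider(downarc, bestdrop, fnode, id, drop);
            consider(downarc, bestdrop, tnode, id, -drop);
        }
        r.close();

        for (Integer id : z.keySet())
            if (!downarc.containsKey(id))
                System.out.println("sink at point " + id);
    }

    // remember this edge if it is the steepest drop seen from point p
    static void consider(Map<Integer, Integer> downarc,
                         Map<Integer, Double> bestdrop,
                         int p, int arc, double drop) {
        Double best = bestdrop.get(p);
        if (drop > 0 && (best == null || drop > best)) {
            bestdrop.put(p, drop);
            downarc.put(p, arc);
        }
    }
}

A full Tintopo would go on to fill the slope item (drop over arc length) and accumulate flow along the downarc chains; this sketch stops at finding downarcs and sinks.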