QUESTION01
01-1 Well, I really like it. I hope people use it!
Solution B: Mac Netscape users can use View > Source... to open SimpleText, start writing at the bottom, then paste into the comment box. 1/27/95
anonymous 03-4 In 03-3, the stub-in code got taken too literally as HTML, not text. The intended change just sets columns to 110, which makes the text box appropriate for an average screen and drops it to its own line. The larger box is easier to work in. 1/27/95 anonymous
03-5 I think this is what was meant above: I took the liberty of editing this out, as I found it confusing. Basically, this person put in a response box of 110 columns. We have changed the size of the response box to 80 characters. (bc)
03-6 It might be nice to have a cancel or edit option at the point where the user sees how a given question or response is going to look. Here's an easy workaround (surf your own hard drive): view the current page as source. This opens the SimpleText editor. Select All and Cut. This gives a blank slate. Write out the question or response. Save As anything.html to the desktop. Back in Netscape, Open File and double-click on your new .html document. View the document just as it will appear on the JPL home page. Save changes as needed. Copy and paste into the 'Submit Comments' box and submit! 2/ 1/95
03-7 I changed the width of the text area from 60 to 80 chars. Hope this helps. 2/ 2/95 Sharon Okonek, JPL, sharon.okonek@jpl.nasa.gov
QUESTION04
I see where someone has done an end run around the name-affiliation-email paperwork by creating a single-paste 'send mail' trigger link in question 22. Can you set things up so all email addresses are provided like this automatically? 2/ 1/95
15-5 Well, I'm not sure. I am not sure if the link in 22 was manually input; it might only be a Netscape feature. It may be possible if the Netscape software can handle it... 2/ 1/95 bruce chapman, jpl, bruce.chapman@jpl.nasa.gov
15-6 As far as linking graphics into the bulletin board, this is certainly an interesting idea... 2/ 1/95 bruce chapman, jpl, bruce.chapman@jpl.nasa.gov
15-7 After talking it over, we have decided not to enable linking small graphics into the bulletin board with the img src HTML command, as then we start having to worry about the amount of disk space that we would have to allocate for this (once it is in a question, we have to keep the image as long as we keep the message). And then there is the problem of inappropriate images. As it is, you can put in a link to an image; you just have to click. If you have an image that you would like everyone to see, then send me an email, and we can possibly put it into the user contribution section for everyone to see. 2/ 6/95 bruce chapman, jpl, bruce.chapman@jpl.nasa.gov
15-8 How do you feel about bb users submitting HTML tables? File sizes stay small with tables; as with images, poor judgment might be exhibited. tom @ emerald
For NIH Image 1.57 freeware, select Import rather than Open. Choose Custom and edit the width and height (e.g., 2000 x 1266) and the file will open. tom@emerald 2/19/95
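The same Import-with-explicit-dimensions trick can be sketched outside NIH Image. A minimal numpy version follows; the dtype and byte order are assumptions you would match to the actual data product, and the 2000 x 1266 dimensions are the ones quoted above.

```python
import numpy as np

# Dimensions from the message above; dtype is an assumption -- try
# '>u2' (big-endian 16-bit) or 'u1' (8-bit) depending on the product.
WIDTH, HEIGHT = 2000, 1266

def read_raw(path, width=WIDTH, height=HEIGHT, dtype='>u2'):
    """Read a headerless raster the way NIH Image's Import dialog does:
    the user supplies width, height, and sample size explicitly."""
    data = np.fromfile(path, dtype=dtype, count=width * height)
    return data.reshape(height, width)
```

If the image comes out as diagonal stripes, the width is wrong; if it is noise, the dtype or byte order is wrong.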
QUESTION32
2 - Because the Sample data sets are so large, it is not very feasible at this time to put the data on-line.
3 - We have put a SAR reference list on-line; though it is admittedly not complete, it is the best that we have right now.
2/15/95
bruce chapman, jpl, bruce.chapman@jpl.nasa.gov
32-2 Nice job on expanding the SAR reference list -- last time I looked it only had one entry! I wonder if some of the papers could be put online, or at least the abstracts, or links if the journals are FTP sites -- one hopes that some of the papers are still around, electronically speaking, and wouldn't have to be OCRed in. I'm at a large research university and frankly, no one has ever heard of some of these journals. 3/19/95
QUESTION34
I got the following information from the EROS Data Center SIR-C survey image home page at this location:
IMDISP: This software is a general-purpose image display program. It supports EGA, VGA and numerous super-VGA display boards. The program is available free of charge. For further information, contact Mike Martin at (818) 306-6038 (JPLPDS::MMARTIN on SPAN).
I think his email address is mmartin@jplpds.jpl.nasa.gov
Maybe someone else knows where to get this software on-line?
3/ 2/95 Bruce Chapman, jpl, bruce.chapman@jpl.nasa.gov
40-2 I just recently discovered that this program for PC-type computers is on the pre-flight SIR-C education CD-ROM. 3/14/95 bruce chapman, jpl, bruce.chapman@jpl.nasa.gov
QUESTION44
There is some potential for real confusion here for people used to 24-bit color monitors. It seems like 24 bits is more than enough to hold 11 bits. However, 24 = 3 x 8, and no single 8 is large enough to hold 11; i.e., the separate RGB screen phosphors are only set to display 256 levels each.
Of course, rounding off to the leading 8 bits and displaying as a grayscale 'byte image' is one possibility. This is a waste of good data -- why send the instrumentation up there and not make use of it? Another option few people will exercise is staring at the array of 640 x 2842 numbers in Word. For other purposes, such as computing slopes or aspects from the DEM, one could do the calculations on the 11-bit numbers in a spreadsheet, then round off to 8 bits after finishing, gaining some value in enhanced accuracy from the extra bits.
Let's not despair. Here's an easy way (but not necessarily the ultimate way) to display all 11 bits at once, using color to supplement the limitations of 8-bit channels. Proceed as in 44-3. Then, in the Channels palette, duplicate the trailing-bit channel, split, and merge as L.a.b., with the leading bits in channel a. This gives a knock-your-socks-off color image explicitly displaying the full resolution of the data, with color trends indicating the broader contour intervals of the leading bits.
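The bit-splitting step of this recipe can be sketched numerically. This is a hedged sketch only: the Photoshop channel merge itself isn't reproduced here, and the stretch factor applied to the trailing bits is an assumption.

```python
import numpy as np

def split_11bit(dem):
    """Split 11-bit DEM values into two 8-bit display channels:
    the leading 8 bits (coarse contours) and the trailing 3 bits
    stretched toward full byte range (fine structure). The two
    channels can then be merged as L.a.b. in an image editor, as
    described above."""
    dem = np.asarray(dem, dtype=np.uint16)
    leading = (dem >> 3).astype(np.uint8)                    # bits 10..3
    trailing = ((dem & 0x7) * (255 // 7)).astype(np.uint8)   # bits 2..0, stretched
    return leading, trailing
```

The trailing channel wraps every 8 elevation counts, which is exactly what produces the fine-structure banding described in the post.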
I will try to put a small piece of the resulting image into the Lost and Found folder at the ftp site; otherwise, email it to Bruce for inclusion on the home page at his discretion.
tom pringle emerald imagery email:tingalsb@oregon.uoregon.edu 3/15/95
44-5 Another neat way to view the data in color is to place the CVV ground image in L of L.a.b. or B of HSB (Hue, Saturation, Brightness) color modes and the DEM data in the other two channels. This gives the radar image of the ground over-tinted with colored contour bands. The fine-structure DEM is very dramatic when viewed in a palette of ten steps of green. Since the channels are all 640 x 2842, there are no co-registration issues. To reduce file size for rapid color experimentation, co-crop the full 3-channel image and split the channels.

Of course, what Tom Farr really wants us to do with these files is use the digital elevation map to draw a 2-D surface in perspective, then drape the CVV radar image over it. If the DEM has delicately colored contour bands (so as not to interfere with the image), an overall 'airplane window' effect is achieved, with a subtle guide to actual elevation to facilitate interpretation of the alluvial fans. Photoshop and the like can only prep the DEM as a DXF file for export to high-end programs such as Strata 3-D. A more affordable solution that has many other imaging uses is macGIS ($30 in the educational version). I am not aware of any freeware up to the task. Again, I'm willing to donate the final file, but there is no way to post it to the TOPSAR folder at this time.
tom @ emerald imagery 3/15/95
44-6 I'm impressed with what Tom Pringle has accomplished with Photoshop and the DEMs! I was never able to get Photoshop to ingest the 16-bit topo data, but with his hints, I'll try again. A program I've used on the Mac that accepts 16-bit data is the free program NIH Image. It of course scales the image to 8 bits for display, but retains the transformation, so you can find out what the height of that mountain, etc., is. It allows color tables to be manipulated, etc., but is far less powerful than Photoshop in terms of overlays and color images. I indeed like to overlay images onto topography for perspective views, but there are many other ways to use the DEMs for earth science applications. One that I'm involved in now is the fitting of 3-dimensional equations to landforms, such as alluvial fans (the shallow cones of gravel coming off the mountains in the Death Valley image). See the Transactions of the Amer. Geophys. Union (Eos), v. 73, p. 553-558 for some details. Other applications are mentioned in an article on TOPSAT (a spaceborne interferometer on the drawing boards) that should be coming out soon in the same newsletter. 3/20/95 Tom Farr, JPL, tom.farr@jpl.nasa.gov
44-7 Tom, that's a new one to me.
What year is your TAGU (Eos) article, and did you mean by newsletter that it is an electronic journal or list server? Your point is a valid one: for accurate equation-fitting of alluvial fans (and other landforms), the data need not be visualized, so 16-bit data is no problem. However, my first impulse would be to just display the hypsometric layered tint (24 = 8 x 3) and stop with that, because the ground scale of the data pixels falls short of topographic variation significant to physical processes, at least in the case of the Death Valley alluvial fans. I'm not sure I understand the advantage of analytic equations: the data is rasterized from day one, so why not just calculate from it directly as a GIS layer? OK, OK, I'll look at the paper.... tom @ emerald 3/26/95
QUESTION42
3/15/95
The radar home page documentation seems to rely on an easy-to-read JPL analysis published in Radio Science, vol. 22, p. 529, 1987, to whose notation and diagrams I refer below.
As I understand it, the Shuttle instrument basically collects as primary data the matrix parameters, which relate transmitted and received polarization vectors by a 2x2 complex matrix. This matrix needs to be invertible from considerations of time-reversal invariance of the electromagnetic interaction, so its determinant is not zero. Sequential scattering is carried by matrix multiplication. In short, the set of all possible scattering matrices forms the non-compact Lie group GL(2,C), an 8-dimensional real manifold.
The JPL article states that an overall phase can be neglected, leaving 7 parameters. This is tantamount to quotienting by the set of all unitary diagonal matrices, a normal subgroup, i.e., GL(2,C)/U(1). The article goes on to cite a principle of reciprocity developed in an unpublished 1965 Dutch doctoral dissertation which sets off-diagonal elements equal, leaving 5 parameters. Now symmetric complex matrices (cf. self-adjoint) lack group closure: the product of two symmetric matrices is not in general symmetric (because transpose reverses order and multiplication is not commutative). Yet sequential scattering is a fact of life, so the 1965 proposal may be invalid. This reciprocity principle is a source of confusion in #20 and #25.
Removing all scalar matrices from the scattering matrix yields GL(2,C)/GL(1,C) = SL(2,C)/Z(2) = SO(3,1) of dimension 6, or, passing to maximal compact simple subgroups, SU(2)/Z(2) = SO(3). We now see that the double cover explains the origin of the double angles in the Poincare sphere representation of the ellipticity diagram used in your polarization signatures. Many physicists would assume this group from the get-go, invoking general unitarity principles for scattering matrices, the need for the inverse matrix to be the adjoint, conservation of the energy-momentum 4-vector, etc.
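The dimension counting above can be tabulated explicitly (a recap of the text, not new content):

```latex
\[
\begin{aligned}
\dim_{\mathbb{R}} GL(2,\mathbb{C}) &= 8 && \text{(four complex entries)}\\
\dim_{\mathbb{R}}\, GL(2,\mathbb{C})/U(1) &= 8 - 1 = 7 && \text{(overall phase dropped)}\\
\text{reciprocity } S_{12}=S_{21}: \quad 7 - 2 &= 5 && \text{(one complex constraint)}\\
\dim_{\mathbb{R}}\, SL(2,\mathbb{C})/\mathbb{Z}_2 &= 8 - 2 = 6 = \dim SO(3,1) && \text{(unit determinant)}\\
\dim_{\mathbb{R}}\, SU(2)/\mathbb{Z}_2 &= 3 = \dim SO(3) && \text{(maximal compact)}
\end{aligned}
\]
```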
Now a circularly polarized electromagnetic field carries angular momentum. Pictorially, the electric field vector rotates about a cylinder whose axis is the direction of translation. In modern terms, the photon is a massless spin 1 gauge boson transforming under integral representations of the proper Lorentz group SO(3,1). Plane-polarized light is at the other extreme (zero spin), while elliptically polarized light is intermediate. In the Poincare sphere, the angular momentum must in effect be the projection of the elliptical polarization vector on the z axis. Note that equatorial polarizations (plane-polarized) have no z-component, as expected.
It is instructive to consider orbits of symmetries that conserve angular momentum, namely the subgroup SO(2) = U(1) of the rotation group SO(3). The action rotates the Poincare sphere on its axis, fixing left and right circular polarizations, taking elliptical polarizations onto their latitude lines, and stabilizing (inter-converting) plane polarizations. This suggests parameterizing polarization signatures by a longitude.
If only magnitude (not sign) is of interest, inversions (reflections that reverse spatial orientation) identify north and south poles, corresponding latitude lines and antipodal equatorial points, reducing the Poincare sphere to RP(2), the real projective plane. Involutions provide a useful eigenspace decomposition. The overall symmetry is expanded to the O(2) subgroup of O(3). More generally, Maxwell's equations are invariant under the full Lorentz group O(3,1) = [Z(2) x Z(2)] x' SO(3,1).
So the data seems subject to very substantial symmetry reduction. Is it not far better for JPL to crunch the data once and for all than to send out huge unreduced CD-ROMs and explain group theory to a thousand end users? The polarization signatures (currently imaged as a quasi-Mercator surface or as level curves on the Poincare sphere) positively glow with redundant symmetry. Where are our friends, the spherical harmonics? Is there not an even and odd decomposition of isotropy and anisotropy? Eigenbases of roughness vectors? Higher order terms that drop? I guess I am asking for a planned features list of your forthcoming software, macSigma 0.
Notice that both AIRSAR and SIR-C/XSAR provide us initially with undisplayable images, that is, virtual images whose pixels are Stokes matrices. While I don't have a personal issue with formal manipulations of multi-dimensional arrays of matrices, image enhancement and scientific interpretation would be greatly facilitated by having monitor-displayable byte images. The real question is, how many real byte image channels will be needed to display all the information, assuming symmetry effects, under-utilized dynamic range, and intra- and inter-band statistical correlation can be removed? And how can these channels be chosen so that they retain physical interpretability?
Sorry to be so long-winded! 3/19/95
To say, for example, that LVV isn't correlated [linearly predictable] with LHH is not to say there isn't some other simple relation between them. This might be adduced on theoretical grounds or empirically from the scatter diagram. I must respectfully disagree with decoupling the analysis of phase and amplitude correlations. [This procedure already wreaks havoc with real variables, where the sign serves as 'phase.'] All the arithmetic operations needed to calculate correlation, such as subtraction, norms, and division, hold for complex numbers. If we looked in a sufficiently obscure statistics journal, we would find statistics was long ago extended to the case of correlation of complex [or quaternionic, with 'phase' a point on the 3-sphere] variables. Correlation is inherently a geometric concept.
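The complex correlation argued for here can be sketched directly. A minimal numpy version follows; the channel arrays are hypothetical stand-ins for LHH and LVV pixels.

```python
import numpy as np

def complex_correlation(a, b):
    """Complex correlation coefficient of two complex-valued channels.
    The magnitude measures linear predictability; the argument is the
    mean phase difference. This treats the complex samples whole,
    rather than decoupling phase and amplitude, as argued above."""
    a = np.ravel(np.asarray(a, dtype=complex))
    b = np.ravel(np.asarray(b, dtype=complex))
    a = a - a.mean()
    b = b - b.mean()
    num = np.vdot(b, a)  # sum( conj(b) * a )
    den = np.sqrt(np.vdot(a, a).real * np.vdot(b, b).real)
    return num / den
```

If b is a scaled, phase-rotated copy of a, the magnitude comes out 1 and the argument recovers the phase offset; speckled channels fall well short of that.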
I am not sure how much we are going to learn about the physics producing the scattering event. This is because if the radar pixels are, say, 25 meters square, there are only about six per acre. A lot of different things can go on across this ground scale. Aren't we learning instead about the superpositioning of a large number of possibly unrelated radar reflection events that produced the final composite signal?
I couldn't find very many images to test on the bulletin board. However, the Weddell Sea image used LHV and LHH as R and G. These images may have gone through many stages of non-linear tweaking. As posted, the correlation coefficient is indeed low. The scatter plot shows even better that the case is hopeless.
Next, I looked at the AIRSAR Death Valley images for the L, C, and P radar bands. I did the principal components in Dimple, finding little inter-band linear correlation, as these things go:
Never heard of a Frost filter -- what is it supposed to do? Does it have another name? tom @ emerald
!local Lee sigma filter as Dimple IOL script
images
x "whatever" input ;
localmean "local mean" temp ;
lmsqd "local mean squared" temp ;
lsdinitial "pixel squared" temp ;
mlsdsqd "mean of squared pixels" temp ;
lsd "local standard deviation" temp ;
final "Lee filtered" output ;
operations
! 5x5 box average of the input
localmean = filter x (1, 1, 1, 1, 1,
                      1, 1, 1, 1, 1,
                      1, 1, 1, 1, 1,
                      1, 1, 1, 1, 1,
                      1, 1, 1, 1, 1 ) / 25 ;
lmsqd = localmean * localmean ;
lsdinitial = x * x ;
! 5x5 box average of the squared input
mlsdsqd = filter lsdinitial (1, 1, 1, 1, 1,
                             1, 1, 1, 1, 1,
                             1, 1, 1, 1, 1,
                             1, 1, 1, 1, 1,
                             1, 1, 1, 1, 1 ) / 25 ;
! local sd = sqrt( mean of squares - square of mean )
lsd = sqrt( mlsdsqd - lmsqd ) ;
! replace a pixel with the local mean when it lies more than 1.5 local
! standard deviations from that mean (squared comparison avoids abs)
final = If ( (x - localmean) * (x - localmean) > 2.25 * lsd * lsd ) then localmean ;
Else x ;
endif ;
!local Lee sigma filter with gaussian blur as Dimple IOL script
images
x "whatever" input ;
localmean "local mean" temp ;
lmsqd "local mean squared" temp ;
lsdinitial "pixel squared" temp ;
mlsdsqd "mean of squared pixels" temp ;
lsd "local standard deviation" temp ;
gauss "local gaussian blur" temp ;
final "Lee filtered" output ;
operations
localmean = filter x (1, 1, 1, 1, 1,
                      1, 1, 1, 1, 1,
                      1, 1, 1, 1, 1,
                      1, 1, 1, 1, 1,
                      1, 1, 1, 1, 1 ) / 25 ;
! 5x5 gaussian kernel; weights sum to 52
gauss = filter x (1, 1, 2, 1, 1,
                  1, 2, 4, 2, 1,
                  2, 4, 8, 4, 2,
                  1, 2, 4, 2, 1,
                  1, 1, 2, 1, 1 ) / 52 ;
lmsqd = localmean * localmean ;
lsdinitial = x * x ;
mlsdsqd = filter lsdinitial (1, 1, 1, 1, 1,
                             1, 1, 1, 1, 1,
                             1, 1, 1, 1, 1,
                             1, 1, 1, 1, 1,
                             1, 1, 1, 1, 1 ) / 25 ;
lsd = sqrt( mlsdsqd - lmsqd ) ;
! replace outliers with the local gaussian blur instead of the local mean
final = If ( (x - localmean) * (x - localmean) > 2.25 * lsd * lsd ) then gauss ;
Else x ;
endif ;
3/25/95
42-1 Pam Logan, of the China Exploration and Research Society, is one person looking at the data. You can contact her at pamlogan@alumni.caltech.edu. The World Monument Fund has some people working on this, too. 3/ 7/95 Tom Farr, JPL, tom.farr@jpl.nasa.gov
42-2 I'm a student that is new at the net. I would like some help on how to surf the net. Thanks
3/15/95
QUESTION29
29-1
Remote Sensing Sites
Imaging Radar
Canada
USGS, USFWS, BLM, USFS
Other Planets
Imaging Software
24-1 There is often very little correlation between the like-polarized (HH and VV) channels and the cross-polarized (HV or VH) channels in polarimetric radar measurements. The correlation between HH and VV can be anywhere between 0 and 1 and often carries a great deal of information about the physics of the scattering which produced the measurements. An additional complication is that the phase difference between HH and VV (which are actually complex values) also carries a lot of information. I can send you some materials on this if you send me your street address via e-mail.
Tony Freeman, JPL, tony.freeman@jpl.nasa.gov 3/16/95
24-2 Thank you, Tony. I still have a long-winded question for the bulletin board about the raw SIR-C data set and its subsequent digestion. I have been puzzling over questions and responses of #20 and #25, and now #24.
L, C, P Inter-Band Correlation

Component   Variance
PCA 1       79.0%
PCA 2       17.6%
PCA 3        2.7%

The table needs a modern browser to display its excesses. tom @ emerald
3/26/95
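The principal-components step behind the table above can be sketched in numpy. This is a hedged sketch, not the Dimple procedure; the band arrays are hypothetical stand-ins for the L, C, and P files.

```python
import numpy as np

def band_pca_variances(bands):
    """Given a list of equally sized 2-D band arrays (e.g. L, C, P),
    return the fraction of total variance carried by each principal
    component, as in the PCA table above."""
    # stack as (bands x pixels), one row per band
    X = np.stack([np.ravel(b).astype(float) for b in bands])
    cov = np.cov(X)                                  # band-to-band covariance
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # descending eigenvalues
    return evals / evals.sum()
```

A first component near 80% of the variance, as in the table, would indicate substantial shared brightness structure even when pairwise linear correlation looks modest.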
QUESTION48
48-1 When I just tried it (Friday 3pm California time), it ranged from 10 seconds to download a browse image, to a couple of minutes to download the same file. When it takes a long time, I recommend trying again later; the EROS Data Center server is probably overloaded with requests. 3/17/95 bruce chapman, jpl, bruce.chapman@jpl.nasa.gov
48-2 I suspect that the CD-ROM is being automatically loaded and this takes time. There are over 50 CD-ROMs, so I suspect they are not all on-line. 3/27/95
QUESTION30
30-1 Check http://images.jsc.nasa.gov/html/earth.htm for listings of hand-held photography from the second flight, by lat, lon, or frame number. Unfortunately, they don't list the pix by MET, so it'll be hard to cross-reference to the radar images... 3/23/95 Tom Farr, JPL, tom.farr@jpl.nasa.gov
QUESTION50
50-1 I unfortunately do not have a reference for you about Lee and Frost filtering, but the slant-to-ground range conversion may be done by simple geometry. In the azimuth direction, no correction is necessary unless you want to resample the pixel spacing in that direction. In the range direction, the ground range pixel spacing is the slant range pixel spacing divided by the sine of the incidence angle. The tricky part is the resampling to the ground range projection; I am not sure which interpolation to use, so you might want to experiment with several... 3/24/95 Bruce Chapman, jpl, bruce.chapman@jpl.nasa.gov
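The geometry described above can be sketched for a single range line. A minimal numpy version follows; linear interpolation is only a placeholder (as noted, the choice of interpolation is open), and a constant incidence angle across the swath is an assumption.

```python
import numpy as np

def slant_to_ground(row, slant_spacing, incidence_deg):
    """Resample one slant-range row to uniform ground-range spacing:
    ground spacing between slant samples = slant spacing / sin(incidence).
    Output is sampled at the original slant spacing, interpolated linearly
    (a placeholder -- experiment with other interpolators)."""
    row = np.asarray(row, dtype=float)
    ground_step = slant_spacing / np.sin(np.radians(incidence_deg))
    slant_pos = np.arange(len(row)) * ground_step           # ground position of each sample
    out_pos = np.arange(0.0, slant_pos[-1], slant_spacing)  # uniform ground grid
    return np.interp(out_pos, slant_pos, row)
```

At steeper incidence the stretch shrinks; at 90 degrees the row passes through unchanged, which makes a handy sanity check.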
50-2 Here is a freeware Lee filter -- actually something better, a script for local Lee sigma filtering (an adaptive filter that replaces a bad pixel with the local mean if it is more than a user-specified number of local standard deviations from the local mean). An enclosed, even better variation replaces the bad pixel under the same circumstances with a local gaussian blur instead of the local mean.
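For anyone without Dimple, the same sigma logic can be sketched in pure numpy. The 5x5 window and 1.5-sigma threshold mirror the script; both are tunable assumptions.

```python
import numpy as np

def _box_mean(img, size):
    """Local mean over a size x size window (edge-padded)."""
    pad = size // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)

def lee_sigma(img, size=5, k=1.5):
    """Lee sigma filter as described above: replace a pixel with the
    local mean when it lies more than k local standard deviations from
    that mean; otherwise keep it unchanged."""
    img = np.asarray(img, dtype=float)
    mean = _box_mean(img, size)
    var = np.maximum(_box_mean(img * img, size) - mean * mean, 0.0)
    sd = np.sqrt(var)
    return np.where(np.abs(img - mean) > k * sd, mean, img)
```

On speckled radar data this smooths isolated outliers while leaving homogeneous regions and most edge pixels untouched.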
tom@emerald 4/19/95
58-2 I would like to know the dates on which the various radar images for the contest were taken (e.g., spring or fall), the slant angle for each channel, and also a list of processing steps taken to get everything registered, resized, and byte-imaged. For example, if some channels have experienced interpolation, was this done with bicubic? Might it be possible to add an eighth 'image' of more primary, unedited data? tom @ emerald 4/19/95
58-3 Ahhh, I spelled it wrong, I apologize! I will also add more information as you suggest. 4/21/95 Bruce Chapman, jpl, bruce.chapman@jpl.nasa.gov
58-4 Do contest rules allow submissions in the form of Web HTML pages? Because explanatory notes on methodology are required, this might be a good way to integrate text and image(s). The one-meg size limit could still be enforced. tom @ emerald
4/22/95
58-5 I stumbled on two other Sunbury, PA files on your server that didn't make it into the contest or the radar home page, namely PennContest_Ctot and LTot, evidently total backscatter intensity from the C and L bands. Did you derive Cvv from Chh, Chv, and Ctot for purposes of the seven channels of the contest? tom @ emerald
4/22/95
58-6 Now I see that Ctot and Ltot are displaced to the north from the other images and do not afford the same coverage. They do have some value in understanding the geological context, though. 4/22/95
58-7 The contest has evolved slightly since conception. Originally, the seven GIFs available did not include the vv channel, but instead had the total power. Since the total power is just a function of the other channels, however, I decided to change the channels available to hh, hv, and vv. They were of a slightly different area. I also did not have the raw files available, and I was concerned that the pixel spacing was not exactly 50 meters. In fact, the images of Sunbury are a subset of a much larger, higher resolution image. However, in the interest of keeping the files small, we averaged the images by a factor of 4, and we took just a subset of the image.
I think it is a great idea to make an HTML page that describes the image that you create. If anyone wants to make a clickable image map, I will do what I have to do on my end to make it work... 4/24/95
bruce chapman, jpl, bruce.chapman@jpl.nasa.gov
58-8 I hadn't thought of a clickable map -- great idea and most appropriate. I think this is preferable to degrading the image with overlain text and boundaries -- now the viewers can click over to something they want to see interpreted more cleanly. Suppose we get the clickable map going on our own drive -- will it be portable? It would be cool to see the wider-area, higher resolution file, say an RGB as Ctot, Ltot, X. This would provide contestants with more context as well as the ability to test their interpretations. It's tough now with less than two pixels per acre. I agree with the need to keep files small with seven channels. My usual mode of operation is to track what I am doing on small files in Daystar's Photomatic scripting addition to Photoshop, then run it overnight, mistakes and all, on the higher resolution files. So that could be another contest variant: best Photomatic script. It would make for a better final educational display.
Also, I am a little concerned over just a cubic spline conversion of slant range to ground range. It seems to me that this does not take into account ground topography. A terrain model could be constructed from a DEM for the particular perspective that the radar saw, and the ground pixels adjusted according to their relative aspect. Has JPL considered this? Or are we awaiting better DEMs?.... tom @ emerald 4/24/95
58-9 1) It is a little complicated with clickable image maps, as you have to set things up in the cgi directory, but if a user can supply me all the required files (i.e., the coordinate file), I already have the imagemap program in the cgi directory - check out the on-line documentation on image mapping at NCSA.
2) We are planning a press release for the full-size Sunbury image; it will be released soon.
3) Most SIR-C images have not been terrain corrected. However, the processing team has recently added that capability, and if a DEM exists, it is now possible to do the terrain corrections.
4/25/95 bruce chapman, jpl, bruce.chapman@jpl.nasa.gov
58-10 Just curious, after spending all weekend playing with this: why Sunbury, PA? 5/22/95 Mark Lucas, Harris Corporation, mlucas@alcatraz.ess.harris.com
58-11 We chose Sunbury, PA, for a couple of reasons, none of them very significant. It is an interesting area geologically, but it also has forests and urban areas. It is in a relatively well-populated area, so that potentially some people that live nearby could add their impressions, or people could drive by some day just to see what it looks like. The data had just been processed and was conveniently available. Tony Freeman was the keynote speaker last week at a GIS conference in PA, and he showed this image. That is about it. 5/23/95 Bruce Chapman, jpl, bruce.chapman@jpl.nasa.gov
QUESTION59
You may wish to consider Question 24, Responses 2 and 3. These propose corrections to raw radar data that possibly should be applied before despeckling. JPL has not yet mustered a response to these suggestions, so their merits are unknown.
tom / Emerald Imagery: tingalsb@oregon.uoregon.edu 4/19/95
59-2 The images you want are called Single Look Complex (SLC) images. They are usually available from the same places as the filtered (multi-looked) images. Personally I use ERS-1 data. To order ERS-1 data contact: Radarsat International ERS Order Desk, 3851 Shell Road, Richmond, B.C. V6S 2W2 Canada, (604) 244-0400, (604) 244-0404 (fax). 4/21/95 Ian McLeod, UBC, ianmc@ee.ubc.ca
59-3 Ian, what software are you recommending for analyzing Single Look Complex data from ERS-1? What are the advantages of ERS-1 data? I am not going to order data over the telephone -- does this place [Radarsat International ERS Order Desk] in B.C. not have a home page? I see where there is a home page for ERS-2..... tom @ emerald email: tingalsb@oregon.uoregon.edu (tom pringle) 4/24/95
59-4 SIR-C data is available in the SLC format, and may be obtained through the outreach program. 4/25/95 Bruce Chapman, jpl, bruce.chapman@jpl.nasa.gov
59-5 What type of filter are you testing? We have tested an adaptive filter (FGAMMA filter, Lopes et al. 1993) on SIR-C SLC data of an agricultural area. The filter did a wonderful job. It preserved the edges, strong scatterers, and linear features. We are interested in testing the performance of other adaptive filters - if somebody has a suggestion... 4/26/95 Yves Crevier, Canada Centre for Remote Sensing, crevier@ccrs.emr.ca
59-6 I am just browsing past - but cannot resist the temptation to make a plug. Why don't you process your own data from e.g. ERS-1 raw data on Exabyte? We market a comprehensive space-based SAR processing package, at very reasonable licensing rates, running on Sun and Silicon Graphics platforms - it does a great job for strip mode interferograms, precision calibration, precision focusing, georeferencing, etc. I have forgotten my email address, but fax/phone me for further details if you are interested. Incidentally, the best (I mean stunning) speckle reduction algorithm that I have seen used some form of simulated annealing technique.
4/26/95 Andy Smith, Phoenix Systems UK, tel/fax (181)-549-8878, Forgotten it!
QUESTION57
tom / Emerald Imagery: tingalsb@oregon.uoregon.edu 4/19/95
QUESTION54
This came up before but the response got moved over to the archive. What exactly is the radar-on-a-chip good for? Does it have to do with remote sensing or making your own highway speed trap?
tom@emerald 5/ 8/95
65-2 Soon, the archive of old messages will be on-line. Cascade Head is a coastal headland in Oregon, comprising part of the World Biosphere Reserve program. It is owned on the south by the Nature Conservancy. The conservation issues are a federally threatened fritillary [which feeds exclusively on a violet growing only under certain soil depth and moisture conditions], a rare endemic plant [the Cascade Head silene], and an endangered grassland community type menaced by succession. There is a need to know slope-acres of habitat accurately, as opposed to sea-level projected acres.
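The slope-acres versus projected-acres distinction can be sketched from a DEM. This is a rough sketch under simple assumptions (square grid, slope from central-difference gradients, each cell's planimetric area scaled by 1/cos(slope)); the DEM array is a hypothetical placeholder.

```python
import numpy as np

def slope_acres(dem, cell_m):
    """Estimate true surface ('slope') acreage from a DEM versus flat
    projected acreage. dem holds elevations in meters on a square grid
    of cell_m-meter cells; returns (surface_acres, projected_acres)."""
    dem = np.asarray(dem, dtype=float)
    dz_dy, dz_dx = np.gradient(dem, cell_m)
    area_scale = np.sqrt(1.0 + dz_dx**2 + dz_dy**2)  # = 1 / cos(slope)
    cell_area_m2 = cell_m * cell_m
    m2_per_acre = 4046.8564224
    surface = np.sum(cell_area_m2 * area_scale) / m2_per_acre
    projected = dem.size * cell_area_m2 / m2_per_acre
    return surface, projected
```

On steep headland terrain the two figures can differ substantially, which is the management point being made above.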
The northern half is remnant old-growth Sitka spruce forest, managed by the US Forest Service largely as a Research Natural Area and marine mammal sanctuary. There have not been funds to do more than an initial establishment report. GIS work has been held back by lack of a good digital elevation model of the rugged and slippery terrain.
The adjacent Salmon River estuary has been the focus of a major salt marsh restoration project to benefit fall chinook salmon. Here subtle elevation differences are crucial in the berm-breaching endeavor, to determine the extent of tidal influences on the future plant community.
Radar imaging could offer some unique benefits to the management of all three areas because heavy coastal fog and clouds prevent dependable aerial photo imaging. The acreage is too small for useful Landsat or SPOT pixels.
As an experiment, Emerald Imagery has volunteered to construct and initially maintain a multi-agency Web GIS page for this site, serving up custom maps in response to project-specific GIS queries. [This is essentially a micro-version of Canada's National Atlas.] This approach is needed because it is not cost-effective to train or equip remote field staffs in high-end techno-rubble. Hopefully, the three agencies will also cooperate more effectively, using the Web page as the common repository for submitted mapping data.
I can't promise any short-term glory in this for JPL/NASA but I can say it is darned good project for potentially displaying the capabilities offered by the radar imaging program. tom @ emerald email: tingalsb@oregon.uoregon.edu 5/10/95
67-3 I see now where I should have directed my project plug directly to the outreach channel JPL has kindly provided. They are in need of a slick Web form page to make requests easier. There are a couple of sites I need too, including the Gila River / Black Canyon riparian areas [beaver in the past???] in notorious Catron County, New Mexico [unsafe to visit except by remote sensing!] and the Soda Mt. Wilderness proposal, a pain as far as USGS maps are concerned because it straddles the Oregon/California border. I suppose when each mission has a clickable map of coverage showing processed and unprocessed data, it would make the process easier on the outreach program. tom @ emerald 5/12/95
QUESTION68
Why not use a more tasteful and visually interesting fabric or texture supplied free by Netscape Comm. Corp., supplemented by a subdued JPL watermark on each page, the way Delft does? Finally, are you aware that green and all its variants are copyrighted by Emerald Imagery for exclusive use? tom @ emerald 5/22/95
70-2 I, too, agree that the green is a little bright. The Coast Guard does a nice watermark, too: http://www.webcom.com/~d13www/welcome.html. 6/ 1/95 Tom Farr, JPL
70-3 test 6/ 2/95
70-4 I would like some soothing background music too, like the jazz they play through RealAudio freeware at Emerald Consulting's new site. You can continue to surf, or even quit Netscape, without disrupting the music stream or slowing down that much. Anon. 6/ 5/95
70-5 The new embossed 'JPL' background is classy. It would look good on all the pages. tom @ emerald 6/11/95
70-6 As the above message indicates, I have changed the background to an embossed JPL logo. I hope everyone likes it - I would like to see more identity for the imaging radar program, not just JPL, and maybe change the text color so that text doesn't get confused with the background. 6/13/95 bruce chapman, jpl, bruce.chapman@jpl.nasa.gov
70-7 OK, I'm happy now with the JPL radar background. 6/13/95 bruce chapman, jpl, bruce.chapman@jpl.nasa.gov
QUESTION72
I am very interested in this as well. In many US counties the Soil Conservation Service (SCS) has already published soil polygons over b/w orthophotos. The soil data is ground-truthed to some extent, but the polygons were drawn for the most part over the photos. An error of 15-20% by area [due to unmappable inclusions] was considered acceptable. Often someone has digitized these as well. Thus there are ample "training" regions for teaching radar to recognize soil types.
However, in my opinion, automated radar soil classification won't correlate very well with SCS polygons. This is because soil taxonomy doesn't define categories or distinctions that are necessarily consistent physically. I question with what probability, if you walked into an SCS office with a 6-foot core, it would be correctly identified as to polygon type.
Now radar should be able to do something on soils, and at far better resolution -- and probably something more real than SCS soil types. But radar polygons will probably have to be field-interpreted from scratch. tom @ emerald 6/ 5/95
QUESTION74