Hello folks,

Since I've gone to the trouble of setting up this mailing list of Radiance users, I might as well use it for something. (By the way, feel free to send mail directly to the group yourself -- it's ray@hobbes.lbl.gov.) This is the first edition of a digest of correspondence between myself and other program users. I just take what I think might be interesting to the general populace, not everything. (You're welcome.) If you send me mail that you don't want shared in this fashion, please tell me so at the time. I will start with mail I received (or sent) this year -- I think the older stuff is too out of date now, anyway.

In this digest, you will find discussions on the following topics:

    New options and programs (-p, -z, pinterp)
    Penumbras and source sampling
    Modeling software
    Radiance display drivers
    Pfilt and ies2rad light sources
    Modifying source for huge scenes
    Radiance course possibility

--------------
From greg Thu Jan 11 12:18:04 1990
Date: Thu, 11 Jan 90 12:17:59 PST
Subject: new programs and options

Dear Radiance users,

There is a new rview driver for X11 (written by Anat Gryberg) and a new program for interpolating new views from images, called pinterp. There is also a new option for rpict and pfilt to set the pixel aspect ratio of the output picture, and rview no longer takes x and y resolution arguments. Under X10 at least, you will notice that rview now responds to resize requests. Here are some release notes on the changes:

Added a -p option to rpict and pfilt to set the pixel aspect ratio for output. Instead of giving the absolute x and y resolutions, the user now gives rpict and pfilt the maximum desired resolution for x and y, and the pixel aspect ratio is used along with the given view to calculate appropriate values within this boundary. This makes for much more natural view specifications. For example, for a 512x400 device with a pixel aspect ratio of 1.0, the pfilt command:

    pfilt -x 512 -y 400 -p 1

will always produce the appropriate output, regardless of the aspect ratio of the input picture. If necessary, the x or y output resolution will be reduced to accommodate the device's resolution. A square image would occupy a region of 400x400 pixels.

View shift and lift options were added to the list of standard view parameters, for specifying views for panoramas and holograms.

Rview no longer takes options for x and y resolution, but instead gets them from the device driver along with the pixel ratio. This makes it much easier to change the view aspect ratio (with the vh and vv parameters).

A -z option was added to rpict to write out the distances for each pixel in an image. This may be useful for z-buffer operations, and is used by the new program pinterp, described below.

A program called pinterp was added to the burgeoning list of picture filters and converters. This program is designed primarily to interpolate animated frames for walk-throughs of static scenes, but it has a number of other useful functions besides. It takes as its input one or more rendered pictures (with their corresponding z value files) and a desired viewpoint (hopefully not too far afield from the given images). Pinterp then takes the input frames and moves the pixels to their new computed location. Filling operations make sure that the final image does not have large unpainted regions.
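[A minimal sketch of how these pieces fit together, using made-up file names; the view options follow the same conventions as rpict, but check the pinterp manual page for the exact argument order:

    rpict -vf frame1.vf -z frame1.zbf scene.oct > frame1.pic
    rpict -vf frame2.vf -z frame2.zbf scene.oct > frame2.pic
    pinterp -vf between.vf frame1.pic frame1.zbf frame2.pic frame2.zbf > between.pic

Each rendered picture is paired with the z-file written for it by -z, and the requested view should stay reasonably close to the views of the input frames.]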
-Greg

From greg Tue Jan 16 08:48:57 1990
Date: Tue, 16 Jan 90 08:48:48 PST
To: dasilva%ced.Berkeley.EDU@jade.berkeley.edu
Subject: Re: recovering rpicts

Hello Deanan,

The -x and -y options have not been replaced in rpict; -vs and -vl are new options. There is another new option, -p, which you will need to set to 0 to get your recovery operation to work. Simply add -p 0 to every rpict command in your makefile. This wouldn't normally be required, except that you are recovering files you created with the old rpict, which didn't pay attention to pixel aspect ratios. Read the new manual page for rpict to understand what I'm talking about.

The -z option of rpict is easy to use. Simply give it the name of the file where you want to store the z buffer information, then stand back! The z-file can be quite large, since it stores 4 bytes for every pixel. For a 512x512 image, that's already 1 megabyte. You don't need to do anything special to recover z-file information, just use the option as you did in the aborted rendering.

Good luck, and let me know if you have any other questions.

-Greg
-----------
From greg Fri Jan 19 11:36:32 1990
Date: Fri, 19 Jan 90 11:36:25 PST
To: anderdla@cs.uoregon.edu
Subject: penumbras

Dear Darren,

Thank you for your letter. You don't need to change your scene description at all to generate penumbras, only the options to rpict (or rview). Set -dj to some value less than 1. For most scenes, a value of .5 will do nicely. If the value is too close to 1, strange things may happen for irregularly shaped light sources (see the BUGS section of the rpict manual page). Spherical light sources work best, but rings and nearly square polygons also work as area light sources. Note that if your light source is small in relation to the ratio of the obstruction-to-shadow versus the source-to-shadow distance, then the penumbras will not be very pronounced (i.e. the shadows will be sharp).

I am glad to hear that you are using the software!

-Greg

From greg Fri Jan 19 11:39:57 1990
To: anderdla@cs.uoregon.edu

There is one other thing I should mention. When you generate penumbras, image sampling may start to cause problems. You may need to set the -sp option to 1, which will result in longer rendering times. This is why direct jitter is not turned on normally (rpict -defaults).

-Greg

From anderdla@cs.uoregon.edu Mon, 22 Jan 90 13:07:12 PST
To: greg@hobbes.lbl.gov
Subject: More questions!

Well, thanks for telling me about -dj! There is a real nasty side effect, though. It seems that as I increase shadow jitter, the picture becomes increasingly grainy. I have tried changing the -sj and -sp parameters to overcome this. I have also tried using pfilt, to no avail. Do you know how to overcome this problem? Thanks.

Darren Anderson

From greg Mon Jan 22 13:26:05 1990
To: anderdla@cs.uoregon.edu
Subject: Re: More questions!

Are your light sources very long and narrow? This can cause big problems for the -sj algorithm. You must break up any long sources into smaller, more square pieces. Even if your light sources have an aspect ratio around 1 (ideal), you still should not use a value for -dj greater than about .7. A modest amount of graininess is to be expected of any Monte Carlo sampling technique. In this case, it is caused by the angle between the surface and the light source direction varying with the source sampling.
To reduce the amount of graininess, either lower -dj (cheap) or raise the resolution for rpict and use pfilt to antialias down to the final picture resolution (accurate):

    rpict -dj .5 -sp 1 -x 1024 -y 1024 octree > rpict.pic
    pfilt -x 512 -y 512 -r .7 rpict.pic > pfilt.pic

The -r option of pfilt uses a Gaussian filter, which looks slightly better than the default box filter. If you have adjusted your light sources to give you the picture brightness you want straight out of rpict, you can use the -1 option of pfilt to speed it up. If you don't want to produce an intermediate file (rpict.pic), you can pipe the output of rpict directly into pfilt.

-Greg
-----------
From hchen@gumbo.age.lsu.edu Tue Apr 10 07:34:56 1990
To: gjward@Csa1.lbl.gov
Subject: CAD programs

Dear Mr. Ward,

1. We read your RADIANCE Tutorial within the RADIANCE package. It says that 'the input model may contain many thousands of surfaces, and is often produced by a separate CAD program'. We would like to know what kind of CAD programs can be used in this situation. Can we use AutoCAD as a means to produce a model? If so, how?

2. I tried to run the examples under the examples/conf subdirectory using the make command. It gives me the error message "chair1.oct not found". The original Makefile is as follows:

    #
    # Makefile for the conference room
    #
    VIEW = -vf vf/current
    SCENE = test
    #DEV = X
    DEV = sundev
    AMB = -av .02 .02 .02
    OCTOPTS = -f

    view: $(SCENE).oct
        rview $(VIEW) -o $(DEV) $(AMB) $(SCENE).oct

Actually, the octree file 'chair.oct' sits under the current directory. I don't know why the program couldn't find it.

Thank you for your help.
Huaiming Chen

From greg Tue Apr 10 12:04:25 1990
To: hchen@gumbo.age.lsu.edu
Subject: Re: CAD programs

The conference model is probably not working because you haven't set your RAYPATH variable to include the current directory ".". Radiance uses this environment variable to determine where to look for auxiliary files (incl. instance octrees). The default value is ".:/usr/local/lib/ray", which includes the current working directory.

There is currently no translator from AutoCAD, but we expect to have one sometime in the near future. For it to work, the model would have to have been created with surfaces, rather than lines. We have a translator for GDS (from McDonnell Douglas) and may have one for MacArchitrion soon as well. Right now, genbox, genrev and gensurf are the most useful surface description generators (oh, not to forget genprism, one of my personal favorites).

-Greg
----------
From mb@cs.albany.edu Thu Jun 21 12:50:30 1990
To: GJWard@Csa1.lbl.gov
Subject: RADIANCE

Dear Mr Ward,

We have a network of Sun-3's and Sun-4's running SunOS 4.0.3. We just installed RADIANCE on both architectures but we're having some problems getting it to run. The installation process seemed fairly simple -- a matter of placing binaries and library files in the right places -- and so we did not recompile anything. Following the tutorial given, we get error messages when trying to invoke rview. These are the messages:

    rview: cannot open X-windows; DISPLAY variable set?
    rview: fatal - cannot initalize X

The DISPLAY variable is set to unix:0.0 for any user; it was not clear that this had to be changed in any way. And these messages occurred while in X (we run X11R4). If you could give some help in this matter we would appreciate it, as we would very much like to use this software.
Thank you,
Michele Buselli
State University of New York - Albany
(518) 442-4279

...some back and forth, then:

From greg Wed Jun 27 11:23:12 1990
To: mb@cs.albany.edu
Subject: Re: RADIANCE

Michele,

Since you are already linking the X11 driver into rview directly, there is no need to compile the separate driver program x11dev. Just remove it from the DRIVERS definition in your Makefile. (If you were going to build x11dev, you would have to add one more special compile similar to that for x10.o, but as I said, it would be redundant in your case.) If you don't use X10 at all, you should remove the line for x10dev from devtable.c. Rview will still compile with it in, but without building x10dev, this driver would not function.

Perhaps I should better explain how drivers work in rview. A driver is an interface to the rview program that provides a few basic graphics input and output functions, which are described in driver.h in some detail. There are two basic driver types: drivers that are linked to rview directly, and standalone programs that talk to rview via a pair of UNIX pipes. Due to efficiency considerations, linked drivers are usually preferred, but there are a few reasons for having standalone drivers instead:

    1) The libraries used by the driver are incompatible with other program requirements or drivers. (E.g., sunview libraries prevent the use of UNIX signal facilities, and X10 and X11 calls interfere.)
    2) The libraries are only supported on certain machines.
    3) The driver's libraries result in a huge program. (E.g., when I attempted to link rview to sunview in the past, the compiled program quadrupled in size!)
    4) Standalone drivers can be compiled without changing any of the code for rview, thus avoiding the need for source recompilation.

In your case, you will still need to compile sundev as a standalone driver, but you can link to x11 directly (as your Makefile does already). Compile x10dev only if you are still using X10 on some machines.

-Greg

From mb@cs.albany.edu Wed Jun 27 14:54:11 1990
To: greg@hobbes.lbl.gov
Subject: RADIANCE

Hi Greg,

Thanks very much. The Makefile compiled just fine. I'm in the process of looking through the rest to see if I need to make any changes. At first glance, there doesn't seem to be a need for this. Just out of curiosity, what kind of environment do you run Radiance under?

Michele

From greg Wed Jun 27 15:05:57 1990
To: mb@cs.albany.edu
Subject: Re: RADIANCE

Hi Michele,

The environment here is sunny most of the summer, although we do get some fog in the mornings (which is nice because things cool down then). Oh -- I guess you mean what kind of computer environment, huh? I have a single Sun-3/60 running SunOS 3.5 and X10R4, and it hasn't changed much in the two years since I bought it. We recently received a grant from Apple and have been running the programs on a Macintosh IIcx running A/UX 1.1.1 and X11R3. (We have just ordered A/UX 2.0.) The architecture department at UCB, which has been using Radiance quite a bit, is running mostly Sun computers, although they were recently given about 10 Silicon Graphics IRIS workstations and I am in the process of getting drivers up on those machines. I don't use sunview much myself, though I have easy access to it. By far, the environment Radiance has been used and tested in most heavily is Sun-3's running SunOS 3.5 or 4.0 and X10.
I wish I had better contact with people using the software on different systems, so that I could incorporate their modifications and additions back into the distributed code, but I don't communicate much with folks outside of UCB.

-Greg
----------
From emo@cica.indiana.edu Wed Aug 15 07:41:29 1990
To: greg@hobbes.lbl.gov
Subject: why luminance intensities so low???

Why is it the case that the radiance values output from ies2rad seem to be woefully low? It's not unusual for the initial values to have to be increased by factors of 3-10. Is there some trick I'm missing that can be played with the '-dX' option to ies2rad? For instance, if one were to set up an actual IES lighting device 10 feet from a white wall, the visual impact of that illumination is much more profound than that obtained by simulating the same IES light source in Radiance projected onto a white wall 10' away. Any clues/suggestions?

eric

From greg Wed Aug 15 07:54:00 1990
To: emo@cica.indiana.edu
Subject: Re: why luminance intensities so low???

Hi Eric,

Thanks for spotting the inconsistency in func.c! I will fix it in future distributions. (I think only one other went out with the wrong version.) As far as the low light levels are concerned, you must specify the same units to ies2rad with the -dX option as you are using in your scene. Other than that, you must also realize that the image you get from rpict is not exposure-adjusted, and you will probably have to use pfilt to get a nice picture. The pixel values in the file correspond to radiance, which is not always in the right range for display. Pfilt fixes that.

-Greg

From emo@cica.indiana.edu Wed Aug 15 09:06:00 1990
To: greg@hobbes.lbl.gov
Subject: using pfilt

Could you send me a bit more info on using 'pfilt' to obtain an exposure-adjusted image? Is the 'one-pass' option better in this regard? What about using the other 'filtering' functions?

eric

From greg Wed Aug 15 18:22:53 1990
To: emo@cica.indiana.edu
Subject: Re: using pfilt

Pfilt without any options just does an automatic exposure adjustment. The -1 option is faster, but it only works if you already know what exposure to set. If you had run pfilt before, and getinfo printed a line from the final picture saying:

    EXPOSURE=3.52

then you could run pfilt -1 -e 3.52 the next time and get the same picture a little bit faster. The other options are for anti-aliasing and rely on turning a big, high-resolution picture into a smaller, anti-aliased picture. The -r option (with a value of .6 or so) produces a nicer image at a slightly higher processing cost.

-Greg
---------
Modifying source for huge scenes

From emo@cica.indiana.edu Sun Sep 2 14:55:54 1990
To: greg@hobbes.lbl.gov
Subject: large polyhedra

Greetings Greg. Another of the projects I am working on involves some astronomers who produce large-ish data sets, on the order of 65x65x15 (symmetric about the x-z plane). We then use a modified marching-cubes 3D contouring algorithm to resolve certain polyhedra of interest, e.g. surfaces with a specific gas density. These polyhedra are sometimes composed of 40,000+ discrete polygons (actually, triangles). Converted to Radiance 'polygon' format, this file becomes ~9+ Mb, and 'oconv' crashes when producing the .oct file, indicating that it is out of 'object space'. Thus, I have two questions:

1. By modifying MAXOBJBLK in object.h and recompiling 'oconv', can I increase the available space for object storage and thereby permit the loading of my 9+ Mb polyhedra specification?
2. Alternatively, I would like to simply be able to load a .geom file, composed of a vertex table and edge list for each discrete polygon. Looking at the code in 'readobj.c', it's not clear how I can go about implementing the code to interface to this new kind of 'object'. What would you suggest?

As always, thanks for the support!

eric

From emo@cica.indiana.edu Sun Sep 2 14:58:24 1990
To: greg@hobbes.lbl.gov
Subject: polyhedra size

I just discovered that some of the contour surfaces have upwards of 80,000 polygons in their specification. One more question: when approaching 100K polygons, am I going to run into performance bottlenecks in Radiance's implementation? In other words, is it realistic to expect rapid renderings, especially using 'rview', of such large polyhedra? Thanks.

eric

From greg Sun Sep 2 16:16:10 1990
To: emo@cica.indiana.edu
Subject: Re: polyhedra size

Hi Eric,

I was wondering when someone would want to start working with huge models. The main concern is, do you have enough memory? The performance of oconv is O(N), meaning 100,000 surfaces should take 100 times longer than 1,000 surfaces to convert. That is actually going to be your main cost in terms of time. Rview and rpict have an O(N^.33) intersection algorithm, so 100,000 surfaces in general will take roughly 4.5 times longer than 1,000 surfaces.

I don't recommend implementing a vertex-sharing polygon structure. I have considered such a model, and it doesn't save much space -- especially for triangles. You are better off just changing the definitions as you suggested. Besides increasing MAXOBJBLK in object.h (you might try 2047 or 4095 to start), you will have to change the type of OBJECT from short to int (or long). Also, you will probably need to increase MAXOBLK in octree.h to 8191 or more, or you will run out of octree space when running oconv. Also, for better performance, you should probably increase OSTSIZ in objset.c by a like amount, using a prime number. (I suggest you start with 12329.)

I hope you have done a "back of the envelope" calculation to figure out how much memory this is all going to take. You may find yourself in over your head in a hurry. For example, I have 16M on my machine, and it starts to choke on models of around 18,000 surfaces. Let me know how it goes and if you run into any other errors.

-Greg
-------------
From arthur@abies.cfnr.colostate.edu Mon Oct 8 23:04:48 1990
To: GJWard@Csa1.lbl.gov
Subject: RADIANCE

Hello Greg;

If you recall, I first made contact with you last winter. I have recently attempted to tackle RADIANCE again. The Tutorial available in the updated version gave me hope! I do find it very cryptic, however. Do you offer short courses in RADIANCE? I would gladly fly out for instruction. I am unable to progress rapidly enough. Any suggestions?

D. Arthur Sampson
Dept. Forest and Wood Sciences
Colorado State University
Ph.D. Student

From greg Tue Oct 9 10:45:02 1990
To: arthur@abies.cfnr.colostate.edu
Subject: Re: RADIANCE

We are going to have a meeting on Radiance and Superlite (a daylighting analysis program) this January at LBL, and might be able to work a short tutorial into it. If that's too far away for you, and you have a little money to spend, I might be able to recommend someone who has an excellent background in using the software to serve up some private lessons.
-Greg

From arthur@abies.CFNR.ColoState.EDU Tue Oct 9 11:53:43 1990
To: greg@hobbes.lbl.gov
Subject: RADIANCE meeting

Hi;

I would be interested in seeing an agenda for the meeting, or if a Tutorial is appropriate for my request (Would I be welcome in the meeting?), that would work nicely. Also of interest now is this Superlite program you mentioned.

Arthur
------------
End of Radiance Digest v1n1

Let me know if this has been useful to you. It is not my intention to flood people's boxes with unread mail.

-Greg

Radiance Digest, v1n2

Hello Everyone,

Sorry that it's been so long since my last digest mailing. Rather a lot of mail has piled up. I've keyed the subjects so that you can skip to the one you're interested in more quickly. Just search for /^PAT/, where PAT is a topic key from the list below. You can skip to the next section with /^==/. The topics in this issue are as follows:

    LIB     Setting up the library path variable RAYPATH
    OCONV   Oconv parameters and errors
    PART    Partitioned ray tracing using -vs and -vl
    ASPECT  Aspect ratios and pfilt options
    LUM     Computing luminance and daylight factors
    SIG     Questions about '88 Siggraph paper
    COLOR   Dealing with different color formats
    RPICT   Rpict options
    OUT     Using Radiance to compute a simple outdoor scene
    ARCH    Architrion file translator
    ALG     Secondary source calculations and new algorithms

-Greg
======================================================================
LIB     Setting up the library path variable RAYPATH

Date: Thu, 18 Oct 90 15:58:35 -0400
From: spencer@cgrg.ohio-state.edu (Stephen N. Spencer)
To: greg@hobbes.lbl.gov
Subject: RADIANCE question

I'm working through the RADIANCE tutorial (got as far as adding the window to the sample room before, have some time to go farther today for some reason) and I have run into a problem. I'm at the point in the tutorial where I've done:

    $ oconv sky.rad outside.rad window.rad room.rad > test.oct

and am generating a picture with:

    $ rpict -vp 2.25 0.375 1.5 -vd -0.25 0.125 -0.125 -av 0.5 0.5 0.5 test.oct > test.pic

and it gives me:

    rpict: fatal - cannot find function file "rayinit.cal"
    rpict: 30296 rays, 49.22% done after 0.0141 CPU hours

after working a bit (two, three minutes on a Sun 4/110). Any idea(s) as to why this is dying? All my previous images have been generated without trouble. thanks...

steve spencer

ps: I'm reasonably certain that I have entered all of the data files from the tutorial document correctly.

Date: Thu, 18 Oct 90 13:14:31 PDT
From: greg (Gregory J. Ward)
To: spencer@cgrg.ohio-state.edu
Subject: Re: RADIANCE question

Hi Steve,

The problem appears to be that your Radiance library files are not in their standard location, which is "/usr/local/lib/ray". The README file in the distribution describes where to put things. The files in ray/lib should go in the library location. It's OK not to put them there, but you need to assign the environment variable RAYPATH if they're somewhere else. For example, if the Radiance distribution is sitting in /usr/local/src/ray, then you can put the following in your .login:

    setenv RAYPATH .:/usr/local/src/ray/lib

and everything should work. Note that RAYPATH is like the Bourne shell's PATH variable, in that it can take any number of directories to search in order. Typically, you want to have "." first, and you may have your own private library to use before the system directory, plus someone else's library at the end. Good luck. Let me know if you have any more problems.
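[To make that last point concrete, a search path with a personal library ahead of the system one might look something like this -- the personal directory name here is made up:

    setenv RAYPATH .:/home/steve/raylib:/usr/local/lib/ray

The directories are searched left to right, so files in the current directory or the personal library are found before the copies in /usr/local/lib/ray.]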
-Greg ====================================================================== OCONV Oconv parameters and errors Date: Wed, 24 Oct 90 14:54:25 EST From: Eric Ost To: greg@hobbes.lbl.gov Subject: oconv parameter question Greetings Greg, This message concerns parameters to 'oconv'. In particular the '-n' and '-r' options. I think I mentioned that we have modeled the new Computer Science building from the blueprints and I have begun building Radiance versions of all the geometry files. During the course of converting the polygonal data files into octtree format files I ran into the following error message: oconv: internal - set overflow in addobject (TOP.187) I traced its source to 'oconv.c'. The manual page for 'oconv' states that by increasing the setlimit ('-n') and resolution ('-r') parameters this message may be avoided. The file f3n.rad (the 3rd floor geometry) only contains 1408 discrete polygons, yet even when I use 'oconv' as: oconv -n 10000 -r 16384 -f f3n.rad > f3.oct I still get the 'set overflow' error message. I also have tried scaling the entire geometry up by a factor of 10.0, which increases the inter-object spacing. Even so, the error still occurs. Do you have any ideas? BTW: I can send you the geometry file if you wish. Thanks. eric Date: Wed, 24 Oct 90 13:09:32 PDT From: greg (Gregory J. Ward) To: emo@cica.indiana.edu Subject: Re: oconv parameter question First, increasing n cannot eliminate this error. I checked the manual page, and it doesn't say to increase -n when you encounter this message. Since -r doesn't seem to help, though, I would guess that you have many surfaces (more than 128) piled on top of each other causing the error. You must eliminate this problem in the scene description first. (Perhaps it is due to a translator bug.) -Greg Date: Wed, 24 Oct 90 15:20:26 EST From: Eric Ost To: greg@hobbes.lbl.gov Subject: Re: oconv parameter question You're correct about the manual page not mentioning an increase in '-n'. Sorry for the confusion. This could be a bug in the translator. I am going to re-check the code I wrote yesterday. It is a program to translate from WaveFront .obj to Radiance .rad format. Pretty simple, but it's possible that I have an 'off-by-one' error... though, the first floor looked ok when I rendered it. The output file consists of triangles only. I am using a simple method of splitting rectangles, etc., into several triangles. What does it 'mean' for two objects to be piled on top of one another? More than two objects sharing an edge in common? Thanks. eric Date: Wed, 24 Oct 90 13:31:55 PDT From: greg (Gregory J. Ward) To: emo@cica.indiana.edu Subject: Re: oconv parameter question If you are willing to share your translator, I'd love to distribute it! By "piled up" I mean coincident. In other words, overlapping coplanar polygons. I have seen this error before and it almost always comes from this type of problem. Date: Wed, 24 Oct 90 16:16:47 EST From: Eric Ost To: greg@hobbes.lbl.gov Subject: partitioned ray-tracing I started compiling Radiance on our Stardent but ran into a nasty NFS bug which caused the machine to actually crash. I'll probably get back to it after I finish with this next batch of client requests. My, perhaps naive, idea was to sub-divide the image into equal areas per-processor and let it run. For example, with a 4 processor machine, a 1024x1024 image would be split into 4 256x256 sub-images. What kind of speed-up could we expect? 4 times? And, is Radiance amenable to this kind of modification of its run-time architecture? 
eric
======================================================================
PART    Partitioned ray tracing using -vs and -vl

Date: Wed, 24 Oct 90 14:39:48 PDT
From: greg (Gregory J. Ward)
To: emo@ogre.cica.indiana.edu
Subject: Re: partitioned ray-tracing

Yes, 4-way image partitioning yields a 4 times speed improvement, and yes, Radiance is amenable to such a procedure. The only time this would not result in a linear speedup is if you were using the indirect (ambient) calculation capability, which is a global calculation.

To split up your rendering into 4 pieces, you should determine the view you want, then reduce the horizontal and vertical size (-vh and -vv) by two. (For perspective views, this means: newsize = 2*atan(tan(oldsize/2)/2).) Then, use the -vs and -vl parameters like so:

    Upper left:   rpict -vs -.5 -vl .5 -x 512 -y 512 [args] > ul.pic
    Upper right:  rpict -vs .5 -vl .5 -x 512 -y 512 [args] > ur.pic
    Lower left:   rpict -vs -.5 -vl -.5 -x 512 -y 512 [args] > ll.pic
    Lower right:  rpict -vs .5 -vl -.5 -x 512 -y 512 [args] > lr.pic

Then combine the images with pcompos thusly:

    pcompos ul.pic 0 512 ur.pic 512 512 ll.pic 0 0 lr.pic 512 0 > full.pic

Note that a 1024x1024 image splits into four 512x512 images, not four 256x256 ones. The reason for using -vs and -vl is to get the correct skewed perspective in each quadrant. These options were designed for creating panoramic views a piece at a time, as might be needed for a mural or something.

-Greg
======================================================================
ASPECT  Aspect ratios and pfilt options

Date: Thu, 25 Oct 90 14:17:11 -0400
From: spencer@cgrg.ohio-state.edu (Stephen N. Spencer)
To: GJWard@Csa1.lbl.gov
Subject: re: aspect ratios (again)

Something's not right here. I'd like to generate a 640 by 484 pixel image (ratio is 1.322314) with a 50-degree horizontal view angle. I used the following command (a fragment of it...):

    -x 640 -y 484 -vh 50.0 -vv 37.8125

The ratios 640/484 and 50.0/37.8125 are identical, 1.3223140495. The image produced is only 640 pixels by 470 pixels. Any idea where my last 14 scanlines went? Thanks.

steve

ps: oh, the output was the same whether I had "-p 1.0" in the command line or not.

Date: Thu, 25 Oct 90 11:38:22 PDT
From: greg (Gregory J. Ward)
To: spencer@cgrg.ohio-state.edu
Subject: re: aspect ratios (again)

The ratios between angles in a perspective view do not give the aspect ratio of the image, unfortunately. It all has to do with tangents. The actual aspect ratio is given by:

    view aspect = y/x = tan(vv/2)/tan(vh/2)

(By that formula, -vh 50 and -vv 37.8125 give y/x = 0.734, or 470 lines at -x 640 -- which is where your missing scanlines went.) This is a large part of why I introduced the -p option to the programs -- it was simply too difficult to do this calculation every time. Generally, you can give the maximum resolution in both x and y that you will allow, and it will fit an image with the proper aspect ratio within those bounds. If you must have absolute control over the x and y resolution, just give a -p of 0, and make sure you know what you're doing. In any case, getinfo will report the PIXASPECT of the image if it differs from 1.

-Greg

Date: Wed, 31 Oct 90 08:35:32 -0500
From: spencer@cgrg.ohio-state.edu (Stephen N. Spencer)
To: greg@hobbes.lbl.gov
Subject: re: RADIANCE

Any quick answer to 'nice picture, Steve, but what about anti-aliasing'? I'm looking at the parameters and have a couple of ideas, but nothing is jumping out at me right now. (Given the date, that's probably best.) What parameters should be changed to anti-alias the image produced? (As if 'anti-alias' were a verb... jeez.) thanks...
steve

Date: Wed, 31 Oct 90 08:41:43 PST
From: greg (Gregory J. Ward)
To: spencer@cgrg.ohio-state.edu
Subject: re: RADIANCE

Use pfilt for anti-aliasing (in computer science, we're verbing words all the time). First you should generate an image at higher resolution than you'll need for the final result. Then, reduce the resolution with something like:

    pfilt -x /2 -y /2 input > output

For the best results, use an input resolution that is three times what you want in the final image, and employ the -r option of pfilt for Gaussian filtering:

    pfilt -x /3 -y /3 -r .67 input > output

The argument to -r sets the amount of "defocusing", larger numbers resulting in less focused images. A value of .67 seems to be about optimal for most purposes. (Note that the -r option takes somewhat longer than the default "box" filtering. Also, you can use the -1 option of pfilt to speed things up if you know how to adjust or have already adjusted the exposure.)

-Greg

Date: Wed, 31 Oct 90 12:04:21 -0500
From: spencer@cgrg.ohio-state.edu (Stephen N. Spencer)
To: greg@hobbes.lbl.gov
Subject: re: RADIANCE

Thanks. I guess I was looking in the rpict documentation instead of the pfilt documentation.

steve
======================================================================
LUM     Computing luminance and daylight factors

Date: 6 December 90, 10:28:38 MEZ
From: MICHAEL.SZERMAN.00497119703322.KKAC%ds0rus1i.bitnet@Csa3.lbl.gov
To: GJWARD@Csa3.lbl.gov

Hi Greg,

Thanks a lot for your last information about RTRACE. Although it is running well now, one problem still remains. How do we have to convert the three radiance/irradiance values of the RTRACE output to get final luminance/illuminance in cd/m2?

Next, we intend to make a parameter study on a simple scene. For this we need values of illuminance and daylight factor. Is it possible to obtain the daylight factor directly from RADIANCE, or must we calculate it from illuminance by hand?

Last, we want to know whether we have understood the following expression in the right way. You have said "an 88% transmittance glass window has a transmission of 96%". Does this mean that of the outside radiation, 96% goes into the glass and 88% from the glass into the room? If that's right, then we would like to know if the difference of 8% is dependent on your definition of the material glass or if we can choose it ourselves.

Michael+Gernot

Date: Thu, 6 Dec 90 08:58:19 PST
From: greg (Gregory J. Ward)
To: kkac@ds0rus1i.bitnet

Hello Michael and Gernot,

The conversion from spectral radiance or irradiance (rgb) to luminance or illuminance is:

    (.3*r + .59*g + .11*b) {watts} * 470 {lumens/watt}

To get the daylight factor, you must divide by the external irradiance, which is pi times the "ground ambient level" printed by gensky. There is a program called rcalc that performs these types of calculations handily. For example, you can get luminance (or illuminance) from radiance (or irradiance) like so:

    getinfo - < rtrace.out | rcalc -e '$1=141*$1+277*$2+52*$3' > outputfile

Note that you could have piped the output of rtrace directly into rcalc. Getinfo with the - option reads from standard input and gets rid of the first few info lines from the rtrace run. The -h option of rtrace does the same thing, but if you wish to keep the rtrace file, this information can come in handy.

To compute the daylight factor for Feb. 10th at 14:00 standard time in San Francisco, gensky gives:

    # gensky 2 10 14
    # Ground ambient level: 5.256781

(I trust you remember to use the -l, -a and -m options for your location.)
We then use this value in rcalc to get daylight factors from irradiance like so:

    rtrace -h -i octree | rcalc -e '$1=(.3*$1+.59*$2+.11*$3)/(PI*5.257)'

I hope this gives you enough information to do what you want to. As for glass, the 8% difference between 96% (transmission through the medium itself) and 88% (the overall transmittance) is due to reflection, and it varies with incident angle. This value is determined absolutely by the index of refraction, which for glass is 1.52. I debated about making N a parameter, but decided not to. If you want to change the index of refraction, you will need to go to two parallel but opposing dielectric surfaces.

-Greg

[I modified the numbers in this message as part of a correction for lumens/watt. -G]

Date: 21 January 91, 11:41:01 MEZ
From: MICHAEL.SZERMAN.00497119703322.KKAC%ds0rus1i.bitnet@Csa3.lbl.gov
To: GJWARD@Csa3.lbl.gov

Hi Greg,

One new question about RADIANCE: We've heard that there could be difficulties in transforming computed luminance values into screen luminance, because in reality there is sometimes a greater absolute range of luminance than a monitor is able to reproduce. We've got a publication which shows that a combined linear (for luminance less than 200 cd/m^2) and logarithmic (for luminance greater than 200 cd/m^2) transformation should be used. How does RADIANCE solve this problem? And therefore, how realistic is the brightness of a RADIANCE picture? We would be glad for a quick answer, because we need the information for a report in two days.

Thanks a lot, Michael+Gernot.

Date: Mon, 21 Jan 91 16:32:51 PST
From: greg (Gregory J. Ward)
To: kkac@ds0rus1i.bitnet
Subject: display intensities

Hi Michael and Gernot,

I make no attempt in any of the Radiance display programs to map the monitor intensities to anything other than a linear scale. As far as I'm concerned, none of the arguments that have been given for using a logarithmic or a power law mapping of intensities is the least bit compelling. Radiance images and displays are similar to photographs in their relatively hard clipping of intensities outside the displayable range. If a spot is too bright, it comes out white. If it is too dark, it comes out black. Brightnesses in between are displayed in linear proportion to the actual computed brightnesses. I feel that the only way to get a better display is to increase the dynamic range.

There are ways to compensate for lack of dynamic range in displayed images. Ximage and x11image have commands for displaying the numeric value over selected image locations, and this number is not limited by the display. Also, the user can readjust the exposure by picking a spot to normalize against and entering '='. This way, the viewer can dynamically adjust the displayed brightness range without having to rerun pfilt.

I hope this has answered your question. I think that people may eventually agree on the best, most appropriate brightness mapping for displays, but the debate is still raging.

-Greg

P.S. I have tried logarithmic mappings, and I think they are much more difficult to interpret since contrast is lost in the image.
======================================================================
SIG     Questions about '88 Siggraph paper

From: ARIR@IBM.COM
To: greg@hobbes.lbl.gov
Subject: questions

Greg,

Could you answer some questions regarding the ['88 Siggraph] paper? Here they are...

1. Why is the correction term for surfaces in "front" not symmetric, i.e. why can we use a back light value to evaluate illuminance in front of it but not vice-versa?
2. Are secondary values also computed for higher generation sampling?

3. How many rays have you used to compute diffuse illuminance values (using the primary method)?

4. Is the simulation done in terms of actual physical units (could you define a light bulb of 60 Watts) or do you use just numbers that turn out right?

5. Are these superquadrics? Where did you get the textures?

Hope you will find the time to answer... Thanks,

Ari.

Date: Tue, 11 Dec 90 20:41:37 PST
From: greg (Gregory J. Ward)
To: ARIR@IBM.COM
Subject: Re: questions

Hi Ari,

Sure, I'd be happy to answer your questions. One by one:

1. Why is the correction term for surfaces in "front" not symmetric, i.e. why can we use a back light value to evaluate illuminance in front of it but not vice-versa?

A: The reason I allow surfaces in front of a value to make use of it is that the proximity calculated to other surfaces will consider things in front as obstructions. Therefore, it will be excluded by the other accuracy conditions if there is a problem.

2. Are secondary values also computed for higher generation sampling?

A: If by secondary values you are referring to the interpolation method, the answer is yes. In fact, this is where interpolation really starts to pay off, since the number of rays would increase exponentially otherwise. When the first ray is cast, it spawns a few hundred interreflection samples as part of a new primary calculation. These in turn spawn more primary calculations at the next level, and so on. But since the results of each primary calculation are cached and reused for neighboring samples, the actual number of primary calculations is limited by the total surface area sampled. This fails to bring dramatic savings only when the geometric complexity is so great that most samples result in a new primary calculation (a dense forest scene, for example).

3. How many rays have you used to compute diffuse illuminance values (using the primary method)?

A: This parameter varies quite a bit from one scene to another. I cannot answer in general except to say that you must have enough rays in the initial sample to produce a value within your desired tolerance. For a simple rectangular space, you can get away with a hundred rays or so for 5-10% accuracy. A typical furnished office with daylight may take several hundred or even a thousand rays for the primary calculation. (This number can be reduced for subsequent reflections with no loss in accuracy.)

4. Is the simulation done in terms of actual physical units (could you define a light bulb of 60 Watts) or do you use just numbers that turn out right?

A: Yes, physical units are used throughout. Total output is defined for light sources as well as directionality. I even have a program to import light fixture data from manufacturers.

I am going to Lausanne to work on daylight simulation. It should be fun, except I don't speak a word of French!

-Greg
======================================================================
COLOR   Dealing with different color formats

Date: Fri, 4 Jan 91 14:02:27 EST
From: Ken Rossman
To: greg@hobbes.lbl.gov (Gregory J. Ward)

Yeah, I think twice now (well, once from the last distribution, several months ago, and once just yesterday). :-)

I have a couple of questions and comments, though, since I have you "on the line" here:

- I have yet to make rview work right on my display.
I had thought it was having problems with my 24-bit display originally (I have a Sun-4/110 with a cg8 frame buffer, which is 24-bit color), but the same thing happens on 8-bit color displays. I tried 'rview'-ing some of the sample octrees that are included with the distribution, and I generally get just a square in the upper right-hand corner of the window that is created by rview (using '-o x11'), and I see that it seems to be trying to resolve the picture in successive quadrants and all that, but that's about all I can tell of what it is doing. Sometimes I only see a completely white square; on other images I see a larger, gray one, but the result is never quite what I thought it should be. I take it that might mean I'm just not using the right defaults at runtime? If I give it no defaults on the command line, does it pick reasonable ones that should show me something of what is going on?

- I tried out ra_pr24, and noticed that it has the same ailment as many other pieces of code I have played with over the years in conjunction with this particular 24-bit display. A Sun 24-bit raster is stored in either XBGR order (for 32-bit rasters, where X is currently undefined), or in BGR order (for 24-bit rasters). In either case, though, most folks would expect the channels to appear in RGB order, but they are, instead, the reverse of that. When I view some of the images using ra_pr24, I get blues and reds reversed.

- I just happened to notice that in the man page for ra_bn, the text refers to it as ra_t16 instead. FYI,

/Ken

P.S. -- Thanks for this distribution!!!

[Ken included my response in his following letter. -G]

Date: Fri, 4 Jan 91 15:26:58 EST
From: Ken Rossman
To: greg@hobbes.lbl.gov (Gregory J. Ward)

Hi Greg,

> There is probably one of two (or two of two) things wrong with running rview -- the exposure and the viewpoint. Neither one is set to a reasonable default, since this varies considerably from one scene to another.

Well, that'll do it. I was just playing dumb and not setting these parameters at all when running rview before. I tried messing with the exposure a bit in rview, and that changes things a bit, but I know now that the viewpoint isn't right (I don't know how to properly change that around right now, because I don't know what relative units I am working with, and what values might be reasonable -- these must also be picture dependent).

> The view parameters (particularly -vp and -vd) should be set according to where you want to be looking in the scene.

Right... by the way, what is the equivalent command (if there is one) in the interactive shell part of rview for this?

> There is often a view parameters file associated with an octree, which ends in a .vp suffix in general. You may give this to rview with the -vf option. The exposure is a problem that is easily solved after the rendering starts using the "exposure" command. Just type "e 1" once you are in rview and it does the rest. (It may be necessary to use ^R as well to redraw the screen and get a better color table allocation on an 8-bit display.)

OK, I'll try all of those suggestions out.
I'm really starting to use this software now, because I'm coming to the point where I will be having some real applications for it. (I think I mentioned a long time ago that one thing I wanted to do was some *very* fancy presentation graphics, for things like slide shows, and while this package isn't really aimed at that kind of application, assuming I can get things like your 3D fonts working right, it sure could look good for it!)

> I'm really disturbed if 24-bit rasterfiles are actually BGR instead of RGB!

Sorry, but they are, in the case of Sun raster files. Sun defines a 24-bit and a 32-bit file format, and they are, as I said before, in BGR and XBGR order, respectively.

> I wrote the translator to hand files to another program, which must have this same confusion, since they work together so well. I guess the only solution is to add an option to ra_pr24 to tell it to put out RGB sequences instead of BGR (which should be the default if it is correct!).

I had thought ra_pr24 was supposed to write out a standard Sun raster file. If that's the case, then you do need to write out the files in BGR or XBGR order. What other program did you expect ra_pr24 to "feed"?

> Thank you for spotting the problem in the ra_bn manual page.

You're welcome. And as always, thanks for all your efforts on this software!

/Ken

Date: Fri, 4 Jan 91 15:44:03 EST
From: Ken Rossman
To: greg@hobbes.lbl.gov (Gregory J. Ward)

That's got it! I issued the following command (as per some of your instructions in the previous message):

    rview -vf examp.vp -o x11 examp.oct

and the thing fired right up, and is looking good! I'm curious, though. Does rview know it is working in the 24-bit domain, or does it operate in an 8-bit domain internally and upward expand the resulting image for 24-bit planes?

/Ken

Date: Fri, 4 Jan 91 13:29:15 PST
From: greg (Gregory J. Ward)
To: ken@watsun.cc.columbia.edu

The program I was feeding with ra_pr24 is yet another converter for a Kodak color printer. I don't have access to any 24-bit color Suns, so I've never tried ra_pr24 there.

The internal color calculations are floating point. They use 4 bytes per color within the program, and are converted to 3 one-byte mantissas with a common exponent when written out to a Radiance picture file. In every case, there is better accuracy available than can be displayed on any existing hardware. The drivers for true color under X11 still need some work, since I have never gained access to a 24-bit machine running X11 and have not debugged the code for x11image or rview there.

-Greg
======================================================================
RPICT   Rpict options

Date: Wed, 9 Jan 91 17:45:04 -0500
From: spencer@cgrg.ohio-state.edu (Stephen N. Spencer)
To: greg@hobbes.lbl.gov
Subject: Radiance parameters

Quick question: What parameters and values would YOU give to 'rpict' if you wanted to make an image which did a good bit of interreflection of light and distributed the light sources (gave the light sources some penumbra)?

steve

Date: Wed, 9 Jan 91 14:50:47 PST
From: greg (Gregory J. Ward)
To: spencer@cgrg.ohio-state.edu
Subject: Re: Radiance parameters

I'm not sure I can give you a good answer about parameters, since it depends a lot on the scene. Could you describe it for me briefly, including the number and types of light sources and whether there is any daylight in the space? How long are you willing to wait? What machine are you using? Unfortunately, setting the calculation parameters is more of an art than a science right now...
-Greg

Date: Wed, 9 Jan 91 17:56:39 -0500
From: spencer@cgrg.ohio-state.edu (Stephen N. Spencer)
To: greg@hobbes.lbl.gov
Subject: Radiance parameters

Assume an interior. No daylight. Small spherical light sources, either recessed in the ceiling or in lamp(s) with shades (like a Luxo lamp, though more crudely modeled, at least for now). Tables, chairs. Perhaps (later) a few area sources simulating fluorescent light boxes in the ceiling. (Jeez, I've just described my office. Oh well...) Is that enough description?

steve

Date: Wed, 9 Jan 91 15:19:45 PST
From: greg (Gregory J. Ward)
To: spencer@cgrg.ohio-state.edu
Subject: Re: Radiance parameters

Steve,

For penumbras, make sure that your sources are round, or if they are polygons, that they are roughly square. This ensures that sampling over their area will not result in inappropriate misses. (In future releases, you will be warned when this happens.) You should also turn off image sampling, since source sampling requires that every pixel be calculated. (You should probably render at a higher resolution, then reduce the picture size and anti-alias with pfilt.) Here is a reasonable set of rpict parameters for source sampling:

    -dj .5          # Direct jitter set to .5 (1 is maximum)
    -sp 1           # Turn off image plane sampling

For interreflection, the first thing to remember is that you should save the "ambient" values in an overture run to populate the scene. I suggest the following parameters to rpict:

    -ab 1           # Calculate 1 interreflection bounce
    -ad 256         # Use 256 divisions in initial hemisphere sampling
    -as 128         # Sample 128 directions for variance reduction
    -aa .15         # Set the interreflection interpolation error tolerance
    -ar 10          # Rolloff accuracy at 1/10th global boundary scale
    -af ambfile     # Save "ambient" values in a file
    -av v0 v0 v0    # YOU have to figure out a good value for v0!

The way to figure out v0 is to run rview without setting -av first, then pick a point that's kind of in shadow, but not completely, and use the trace command to get the value. Use the same value for red, green and blue, unless you want a colored ambient component.

Run rpict at low resolution first as an overture to the main rendering, and discard the resulting picture like so:

    rpict -dj .5 -sp 1 -ab 1 -ad 256 -as 128 -aa .15 -ar 10 -af ambfile \
        -av v0 v0 v0 -x 64 -y 64 octree >/dev/null

This stores some ambient values in ambfile to populate the scene and improve the appearance of the final run:

    rpict -dj .5 -sp 1 -ab 1 -ad 256 -as 128 -aa .15 -ar 10 -af ambfile \
        -av v0 v0 v0 -x 1024 -y 1024 octree >picture.raw

Then, you can use pfilt to set the exposure, anti-alias and reduce the resolution:

    pfilt -x /2 -y /2 -r .67 picture.raw > picture.done

Since rpict is likely to take a long time for such a rendering, I find it useful to have it write progress reports every hour to a separate error file using the following options:

    -t 3600 -e errfile

That way, when you log out with rpict running in the background, you can always find out what's going on.

-Greg

Date: Wed, 9 Jan 91 18:21:43 -0500
From: spencer@cgrg.ohio-state.edu (Stephen N. Spencer)
To: greg@hobbes.lbl.gov
Subject: Radiance parameters

Wow. Thanks for the well-commented information. I'll let you know how it turns out.

steve
======================================================================
OUT     Using Radiance to compute a simple outdoor scene

Date: Thu, 10 Jan 91 12:38:10 PST
From: djones@awesome.berkeley.edu (David G. Jones)
To: greg@hobbes.lbl.gov
Subject: cloudy day

A student wants to create an image of a sphere sitting on a plane as it would look on a very cloudy day, i.e. illuminated by a uniform hemisphere (the sky). He does not want to consider inter-reflections (so he doesn't need to use your ambient calculation). Is the only way of doing this to have very many point sources of light on a hemisphere, or does RADIANCE have another way to handle this?

thanks,
Dave

P.S. He's a computer vision student and wants to compare the appearance of a scene under various lighting situations.

Date: Thu, 10 Jan 91 14:31:19 PST
From: greg (Gregory J. Ward)
To: djones@awesome.berkeley.edu
Subject: Re: cloudy day

Hi Dave,

You can use the -c option to gensky to produce a cloudy sky distribution, which is not the same as a uniform sky distribution. (Nature does not usually produce uniform skies.) He should use the interreflection calculation, though, since that's by far the best way to account for the sky's contribution. Try the following description to get started:

    #
    # A cloudy sky at 4pm (standard time) on March 13th
    #
    !gensky 3 13 16 -c

    # the glow type for the sky
    skyfunc glow skyglow
    0
    0
    4 1 1 1 0

    # the actual source for the sky dome
    skyglow source sky
    0
    0
    4 0 0 1 180

    # the glow type for the ground and its source
    skyfunc glow groundglow
    0
    0
    4 1.3 .8 .6 0

    groundglow source ground
    0
    0
    4 0 0 -1 180

The parameters I recommend for rpict are as follows:

    set rparams="-ab 1 -ad 256 -aa .1 -av 4 4 4"

Note that the value for -av is that suggested by gensky. Then, he should run an overture calculation to save up some ambient values in a file, like so:

    rpict $rparams -af ambfile -x 64 -y 64 octree >/dev/null

The final rendering is done with the same parameters, but at a higher resolution:

    rpict $rparams -af ambfile -x 512 -y 512 octree > picture.raw

-Greg
======================================================================
ARCH    Architrion file translator

Date: Wed, 13 Mar 91 16:24 MST
From: JALove%uncamult.bitnet@csa3.lbl.gov
Subject: Architrion Front-end for Radiance
To: GJWard@csa3.lbl.gov

Several months ago, you informed me that an Architrion interface was under development for Radiance. Is it completed? Are interfaces available other than the McDonnell-Douglas BDS-GDS? Thanks.

Date: Thu, 14 Mar 91 08:25:05 +0100
From: greg (Greg Ward)
To: JALove@uncamult.bitnet
Subject: Re: Architrion Front-end for Radiance

The interface currently works only for Architrion text files, which is the package we've been using on the Macintosh where it runs. Did you have some other CAD program in mind? Jennifer Schuman (jennifer@hobbes.lbl.gov) is the one working on it, so you might want to ask her directly. It is in pretty good shape as far as I'm concerned.

-Greg

Date: Thu, 14 Mar 91 16:46:33 PST
From: jennifer@hobbes.lbl.gov (Jennifer Schuman)
To: JALove%uncamult.bitnet@csa3.lbl.gov
Subject: Architrion-to-Radiance

Greg and I are working together on an interface/translator/preprocessor to make Architrion files usable as Radiance scene descriptions. It is available to anyone who has the patience to work with it in its current "alpha" form. I'm not too eager to release it just yet (better to wait a few months), but if you're game then I'm willing. The details are as follows.

Greg has written a translator called "arch2rad" which takes an Architrion 3D text file and, together with a "mapping" file, creates a Radiance scene description. The mapping file is a text file which assigns materials to the surfaces created in Architrion.
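[For anyone who wants to try the translator from the Unix side, a hypothetical invocation would be along these lines -- the option name and file names are guesses, so check Greg's arch2rad write-up that comes with the alpha version:

    arch2rad -m materials.map building.txt > building.rad
    oconv building.rad > building.oct

where building.txt is the Architrion 3D text file and materials.map is the mapping file just described.]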
I have written an interface with HyperCard to create this mapping file. The interface can also be used to create or modify materials. Users map materials onto surfaces by creating a series of rules based on Architrion object identifiers. For example, if I have some blocks called "floor" in Architrion, and have drawn some of them in red and some in blue, then I might make 2 mapping rules to tell Radiance that I want my material called "marble" applied to all red "floor" blocks and "carpet" applied to all blue "floor" blocks. It's pretty simple. It just requires a little forethought when creating the original Architrion model, since how you name, layer, color and place (i.e. position of reference face) blocks is important for locating materials.

If you run the interface under A/UX, then it can also automatically generate the Radiance octree. If you are using the Mac linked to a Unix machine, then you must do some ftp-ing of files back and forth. Arch2rad currently runs on the Unix side of this, while the interface is on the Mac Finder side. This is a pretty inconvenient set-up at the moment, but I expect to have arch2rad running on the Mac, from within the interface, very soon. So if you're eager for the interface, I'd highly recommend waiting at least until that task is finished.

As for documentation, unfortunately all we have is Greg's brief write-up of the arch2rad procedure and my very messy scribbled notes. Since I am my own alpha tester for this (I'm currently doing a small design project with these new tools), I jot down notes as I go. There are little idiosyncrasies in Architrion that are critical with respect to Radiance rendering, and I would like to have a small user's manual/handbook of "tips" for people working with the interface. Unfortunately, this project only gets worked on in my non-existent spare time, so who knows when we'll have a polished release version.

I'm happy to send the software to you over the net. Or call if you'd like to chat about it. (415) 486-4092

Jennifer Schuman
======================================================================
ALG     Secondary source calculations and new algorithms

Date: Mon, 1 Apr 91 21:14:57 PST
From: chas@hobbes.lbl.gov (Charles Ehrlich)
To: greg@hobbes.lbl.gov
Subject: Another Rad Idea

Greg,

I've been cogitating on the "problem" of backward ray tracing, that being that only direct shadow rays to sources will be computed for illumination levels, except when ambient bounces are in effect. In other words, no light bouncing off mirrors will cast shadows, right?

What about the scenario in which the user has the option of defining any number of "virtual sources" by material name, either on the command line or by another material definition that cross-references the specified materials as potential contributors to radiance levels? The thought here is that at every point in the scene, rays would be traced to each light source (statistics permitting), including each "virtual source." When the virtual direct ray hits its mark (again, statistics permitting), it then traces X number of additional rays to the light sources, considers the material properties of the virtual source (like whether it is highly polished metal), and then figures out any additional contribution the reflected images of sources should have on the original scene location. In this manner, it would seem, the effect of light bouncing off of mirrors could be achieved.
Realizing that this method potentially greatly increases the number of shadow rays to be calculated, I've thought of one way that the virtual sources could be implemented with potential speed improvements. Perhaps in the definition of the virtual source, the material names of the sources to be considered as potential reflects could be named. Another possibility might be to somehow "project" all pertinent light sources onto the plane of the virtural source so that when a virtual source ray strikes, it already knows which other sources would possibly be "in view" of the scene location. Boy I hope this makes sense. I put a lot of thought into it because the project I'm currently working on has many, many mirrors in it with sources projecting directly onto them. Take care, Chas From greg Tue Apr 2 11:31:14 1991 Date: Tue, 2 Apr 91 11:31:08 +0200 From: greg (Greg Ward) To: chas@hobbes.lbl.gov Subject: Hello Hello! Hi Chas, Nice to hear from you again. Yes, I've had problems with the Macintosh not writing out its files completely. Often, I'll compile something and everything looks OK, no error messages, but when I go to run the program -- SPLAT! Jennifer is now set up to make Mac distributions, so she can get you fresh copies of all the binaries on floppies, or you can get slightly older binaries from the distribution using: tar xf ~greg/ray/dist/ray1R3.1.tar ray/bin.mac I can also upload the newest and grooviest versions as soon as I recompile them on the IIfx at home. (Fortunately, the IIfx doesn't seem to have the same trouble writing out its files.) I can probably upload them as soon as (my) tomorrow. I don't know what to say about your light source without examining the files myself. I assume that you are being careful to put endcaps on your cylinders, watch the surface normal directions of all surfaces, and be sure that the antimatter does not intersect the illum sphere. For a sine surface distribution on the sphere, you can try something like: hive_bright = 1+A1*sin(Pz*2*PI); A1 sets the degree of variation (1 being zero to 2 times the original brightness), and the scale factor determines the frequency of the variation. I have thought about various hacks for secondary sources such as those you suggested, and all have failed my basic tests for inclusion in a simulation: 1. Must be general enough to be useful. 2. Must not be so complicated that it introduces nasty bugs. Examples of borderline cases for the above tests that I have included are instances and antimatter. Both have been real pains in the hiney, but essential for certain types of models. I think you see the problem already with secondary sources. Besides being very complicated, it is not very general if you treat only planar mirrors. Also, what do you do about small mirrors and mirrors that see each other? Both cases can lead to an astronomical growth in the calculation if you do indirect source testing. Rather than dig into this can of worms, I am going to be trying what I hope is a more general approach to secondary contributions. I will write a separate ray tracing procedure to follow rays from the light sources and compute secondary source distributions at user-specified locations. This approach also has the advantage of not introducing bugs into a working program. I don't see how to get away from the user-specification requirement at this point, since it requires a fair amount of intelligence to know where the significant sources of secondary illumination will show up. 
However, the user will have the capability to enclose objects and declare secondary emitting "volumes" in this way. Thus, even something as complex as a glass chandelier could be tagged as a secondary source by enclosing with a big "illum" and handing it to this other program. Unfortunately, it will take me some time to implement all of this. I don't expect to be really finished before the end of my stay in Switzerland, which may be too late for your particular project. I don't mind if you send this discussion yourself to the ray list. If you don't, I will next time I make a digest. (It's a bit overdue -- I was just waiting to announce 1.4 at the same time.) -Greg Date: Mon, 15 Apr 91 22:58:33 PDT From: chas@hobbes.lbl.gov (Charles Ehrlich) To: greg@hobbes.lbl.gov Subject: What else can't Radiance do? Greg, I'm excited about your current plans/action to write a forward ray tracing pre-processor. I think that it will greatly enhance Radiance's ability to simulate more complex lighting schemes. I have questions, however, as to just how general of a solution it is going to turn out to be. I also don't like the fact that it requires an additional time-consuming step in the ray tracing process. It seems like the pre-processor is very well suited for tasks like modelling light sources with lots of refracting/reflecting elements, but much less well suited for the kind of application I described earlier, that of a room with mirrors on the wall. My first question is in what ways would the implementation of a "smart backward ray tracing (SBRT)" solution as I described before not be a general solution? I've thought about this for the last two weeks and could not think of one geometric scenario in which the SBRT wouldn't work. Was my presentation of how it might be implemented clear? My thoughts about the use of a SBRT in the scenario in question-- a room with mirrors--are that it seems like the SBRT would be a much more efficient approach to determining secondary contributions because the secondary source ray already has a good idea of which sources to do shadow testing of because it has a "favored direction" as determined by the angle of incidence with the reflective/refractive surface. In other words, only those sources within a defined solid angle around the incident ray's direction need to be shadow tested. The secondary shadow testing would follow the actual path that the light would follow from the source to the originating point in the scene. In other words, the convention of doing shadow testing to the geometric center of a polygonal source would not prevent other parts of the polygonal secondary source from potentially reflecting light onto the originating point in the scene. The same would hold true for cylinders and spheres as secondary sources. A completely different way of thinking about this whole shadow ray thing is that there is more than one solution to the equation of the light's path. The first solution is the shortest path whereas additional solutions exist where there are reflective surfaces. What is the feasibility of implementing the SBRT into Radiance? Just a quick guess about the number of additional lines of code written and/or the percentage of existing code that would have to be modified would be great. My second question surrounds the RGB issue? How much of a problem is it that not all of "reality" is describeable with the RGB color model...what percentage of real world phenomenon are left out? 
My thoughts on this one are that if accurate measurements of a material's color and reflectance are going to be made with expensive equipment, how much more difficult is it to extract full-spectrum color reflectance information than simply RGB? Specularity? What is the feasability and usefulness of implementing new material types that used full-spectrum samplings of a material's color instead of RGB? I'm imagining a material that uses an external data file not unlike the data files used for illum distrubutions which contain iterpolatable values at defined intervals along the spectrum. Again, rough estimates of number of lines needed to be written/modified would be great. The next question is an easy one. What about cylindrical sources? You've given me reasons why they don't make sense, but I continue to hang onto this idea based on an alternate way of calcuating shadow rays. The idea is to trace a shadow ray to the "nearest point" along the centerline of the cylinder rather than to say the midpoint of the centerline as you've mentioned to me before as being the "only reasonable way" and as such, it didn't make sense. Again, how feasible is this idea? My last few questions are much more general in nature. If you had unlimited funds and desire to make Radiance simulate "reality" to the "nth" degree, what other things would you implement? And, is there a theoretical limit to the greatest degree of accuracy (with unlimited processing power) that a ray tracing solution (forward and/or backward) can acheive? What is Radiance's theoretical limit? Yum, yum. Food for thought. Chas Date: Tue, 16 Apr 91 14:48:03 +0200 From: greg (Greg Ward) To: chas@hobbes.lbl.gov Subject: digesting... digesting... BURP!! Hi Chas, Thanks for all of your "food for thought". I'll do my best to answer your queries, but I'm not always perfectly clear on your ideas when I read them. I think that these discussions would go much easier in person, but that's not going to be possible for at least a while... First, the smart backward ray tracing idea. I'm not sure I understand this one fully. I think I get the basic tenet of searching for sources during the normal ray tracing process, and I think the idea has merit for large planar mirrors. As I understand it, you call each mirror a window into a virtual world, where you again look at each light source. The costs would be significant, especially for multiple reflections in facing mirrors where the calculation could grow exponentially. Using solid angles to limit tests would avoid some of these costs, but even checking such a list if it's long takes time. I don't know how to make the method work for mirrors that have texture or curvature. The idea as given here would take a couple hundred lines of code and a month or more of my time to implement. I would be interested to try it if I have the time. Second, the cylindrical light source question. I may do something with this. Not cylinders specifically, but I think it would be good to generalize the light source testing procedures to be more forgiving of geometry. Also, I would like the calculation to adaptively sample sources based on their proximity. For example, nearby sources should get more samples than a distant source of the same size. Intensity should also be used to determine sampling. I don't see why I couldn't work shape into such a scheme. 
The main problem is complexity in the source calculation, which should be avoided wherever possible since it is the most heavily used part of the program next to intersection testing. It has to be dirt cheap. That's why I can't currently model cylindrical sources -- I use a simple extended volume model that can't handle surfaces with holes in them. My guess for this one is 200 lines of code and 5 week's time. I will very probably tackle this along with the forward ray tracer, since proper source sampling is critical to the success of this effort. I am not sure how well the forward ray tracing preprocessor would perform in a house of mirrors. It sort of depends on the source sampling scheme I use. If I specified each mirror as a secondary source (illum), and the output distributions were all highly peaked, source sampling would critical to obtaining an accurate result. In general, I beleive the preprocessing time would not add nearly as much to the rendering time as would be required without it. This is the basic precept of creating such a preprocess to begin with. If it's not going to save time, then it's a mistake to use it. The forward approach is not that good if the final surface is a mirror. For such cases, the SBRT method you suggest sounds better to me also. And now for color. I wish I knew the answer to this question. Since I haven't played around with real spectral reflectance data, I can't really give you an opinion. It is not that difficult to implement better spectral sampling in Radiance. It would only require redefining the color operations I've built in as macros. Special programming would be required in a few spots where the macros weren't used. I'd estimate a couple of weeks time. The main reason I haven't done it is because I don't have the spectral data. You should first talk to a student who has been working with Gary Meyer at the University of Oregon in Eugene. He's been using Radiance to test various spectral sampling schemes. He hasn't modified the code, just used multiple runs to increase the number of spectral samples from 3 to 3*N. His name is Roy Ramberg and you can send e-mail to him at ramberg@cs.oregon.edu. I'd be interested in getting copies of your correspondance as well. I have considered separate scaling factors for x, y and z, and decided that it wasn't worth it. Implementing all of the resultant surface types, especially with sorts of skewed slices that would be left at the end of elliptical cones, is very nasty. Many months, many headaches. Also, you destroy the consistency of the lighting calculation if you perform non-uniform scaling. I think that any such operations should happen before the file is imported to Radiance. It's too non-physical for my taste. In the limited case of fitting doors and things like that, it's OK, so it might be worth having a special program called "xfscale" or something similar to do it. I wouldn't want to add it directly to xform, though. I don't know how to answer your general question about what I would implement or like implemented in Radiance given unlimited resources. As far as the simulation itself, I can only do what I know to do, what occurs to me along the way, and what you tell me I should do! I could make a long list of shit work that I would like done writing interfaces to various CAD programs, material and object libraries, image file formats, and so on. 
Mostly, I would like to see completed a nice front-end that would run Radiance and connect to one or more CAD systems that could really do a decent job modeling architectural environments. Personally, I'm not sure Architrion on the Mac is it, but it seems like a good place to start. One theoretical limit to light simulation using ray tracing is its inability to directly model diffraction and other wave effects. Ray tracing simulates light only as far as its particle behavior. You can fake wave interactions with surfaces, however, and this is usually where they are most significant. At any rate, diffraction is not so important to lighting. I would say that modeling polarization would be an interesting addition to Radiance, and not too difficult to implement. This has the most effect on multiple specular interactions, although skylight is also polarized to some degree and this can affect the amount of light entering windows at different angles. (Note that small inaccuracies are generally overwhelmed by the variability of daylight in general.) The main practical limitation of ray tracing (or any global illumination calculation for that matter) is finding all sources of illumination with a small sample set. You must figure out where to look, since you can't duplicate nature's feat of sending trillions of samples into the eye every second. In general, it is impossible to guarantee a solution to within a fixed tolerance. You can only say that you have some degree of certainty that your error is not greater than some amount. There is no guarantee that you won't miss that solar ray that hit your watch and bounced off a satellite, blinding some old woman in Leningrad. Well, it's three o'clock and time to get to work!!! -Greg ~s Radiance Digest, v1n3 Hello Radiance users, I am trying to keep caught up with my correspondance better and redistribute interesting tidbits before they turn to compost. Here is the latest batch of letters on various topics that seemed to me to be of general interest. Once again, I have tried to make it easier to find what you want by searching for the corresponding key. INST Installation of 1.4 and associated problems MOLEC Molecular modeling using Radiance PHONG Phong surface normal interpolation INTERN Radiance internals (image.c) PREVW X11 previewer for Radiance SPEC Spectral distributions curves DAYF Daylight factors and unknown programs VISION Vision-3D modeler for the MacIntosh -Greg ========================================================================= INST Installation of 1.4 and associated problems Date: Mon, 29 Apr 91 19:37:09 EDT From: richards@eleceng.ee.queensu.ca (Haydn Richardson) Subject: Radiance 1.4 I was playing with the new cabin model and got hung with the following error. I ran rview -av .01 .01 .01 -o sundev oct/cabin after building oct/cabin according to the Makefile. The error message is: rview: /images/local/lib/ray/rayinit.cal, line 61: syntax error: rview: and(a,b) : if( a, b, a ); rview: ^ '=' expected Does this indicate a problem with the function and() in rayinit.cal? I noticed that and() is defined in my old rayinit.cal as and(a,b) = if( a, b, a); I assume there is a trivial fix for this problem but I don't want to risk any possible interdependencies by substituting my old version. -Haydn Richardson Queen's University Date: Tue, 30 Apr 91 08:01:22 +0200 From: greg (Greg Ward) Subject: Re: Radiance 1.4 The new ':' definitions go with the new 1.4 release of Radiance, and you must recompile the programs for them to work. 
If you just want to look at the cabin model without recompiling, you can set the RAYPATH environment variable so it searches the location of the old library first, but you should include the new library location in it as well or it won't be able to find some of the files needed for the cabin. I recommend running makeall to install the new programs. -Greg Date: Tue, 30 Apr 91 11:16:59 EDT From: richards@eleceng.ee.queensu.ca (Haydn Richardson) Subject: Re: Radiance 1.4 Actually I did run makeall install and it compiled with minimal errors (all associated with missing XWindows files as we run in Sunview.) RAYPATH does include the location of the new library. The Makefile in the cabin directory complains that plasfunc and metfunc are unknown types. Is there something else that needs to be done to set up the rayinit.cal definitions? -Haydn Date: Tue, 30 Apr 91 17:34:19 +0200 From: greg (Greg Ward) Subject: Re: Radiance 1.4 Status: RO The new version of rview requires X11 to be installed in order to compile properly. You must still be running the previous version. You must manually modify the Makefile in the ray/src/rt directory to remove the X11 dependency. Make the following changes: 37c37 < DOBJS = devtable.o devcomm.o editline.o x11.o x11twind.o \ --- > DOBJS = devtable.o devcomm.o editline.o \ 41c41 < DLIBS = -lX11 --- > DLIBS = You must also remove all mentions of x11 from devtable.c (you might as well take out x10 while you're at it) and reset dev_default[] to "sun". I guess I should have had makeall complain a bit more on failure! -Greg Date: Tue, 30 Apr 91 17:12:06 EDT From: richards@vision.ee.queensu.ca (Haydn Richardson) Subject: Continued problems installing Radiance 1.4 I removed the x10 and x11 dependencies from the rt, util, and px Makefiles. However, I still have the following problems. glareval.c line 480 syntax error near ( which refers to the following statement. #ifdef ALIGN scansize = scansize+(sizeof(ALIGN)-1)) & ~(sizeof(ALIGN)-1); #endif and my compiled version of rview can no longer find sundev. An old version of sundev is in my bin and my bin is in RAYPATH. I appreciate your effort in assisting us through our growing pains. -Haydn Date: Wed, 1 May 91 08:31:59 +0200 From: greg (Greg Ward) Subject: Re: Continued problems installing Radiance 1.4 Hi Haydn, The syntax error is an extra parenthesis. I never tested this particular segment with ALIGN defined, so just s/-1))/-1)/ in the problem statement. Sorry about that. Guess I should update the distribution (already). The name sundev has been changed to just plain old "sun", so you just need to use -o sun instead of -o sundev, although in your case you could make sun the default (dev_default in devtable.c) and not specify a -o option at all. -Greg Date: Sat, 4 May 91 18:15:21 NZT From: pdbourke%ccu1.aukuni.ac.nz@csa1.lbl.gov Subject: Radiance and TAR message Loaded RADIANCE this evening but when I tar it I get the following: /dept/arc/pdbourke >tar xfo Radiance1R4.tar tar: ray/src/util/glareval.c - cannot create I thought I may have a corrupt copy from the FTP so redid it, same result! Any suggestions?? Date: Mon, 6 May 91 08:31:43 +0200 From: greg (Greg Ward) Subject: Re: Radiance and TAR message Oops! Sorry about that! Glareval.c is a duplicate that was added at the end of the tar tape. Apparrently, this won't work unless you change the mode of the first extracted file during the extraction and before it gets to the second one. 
You can ignore the error for now, but you may get a syntax error during compilation of this file later which you can also ignore. I will repair the distribution and you can try again later if it concerns you. -Greg ======================================================================= MOLEC Molecular modeling using Radiance Date: Fri, 26 Apr 91 18:15 EST From: cristy%ESSUN3%ESVAX@dupont.com Subject: Radiance I have been looking for a renderer that can do a good job modeling the area of intra-penetration of two nearly transparent intersecting spheres. I am hoping Radiance may solve this problem. I will be experimenting with materials and Radiance parameters hoping I can finally get a good rendering. If you have any suggestions that would nudge me in the right direction with Radiance, I would be grateful. I just started with Radiance and I have been having trouble getting a correct view so I can look at the window directly for the model described in the tutorial. The default view as specified in the tutorial looks at the wall opposite of the window wall. I tried different view points and view directions without luck. One thing that confuses me is in rview the point retains its original unnormalized form: 2.25 0.375 1, however, the view direction appears to be normalized (-0.25 0.125 -0.125 is reported as -0.8 0.408 -0.408). Is this the correct behavior? Do you know the correct view point and view direction so that the wall with the window appears in the image? Thanks in advance. cristy@dupont.com Date: Mon, 29 Apr 91 11:41:03 +0200 From: greg (Greg Ward) Subject: Re: Radiance Dear Jonn, Unfortunately, I do not have a copy of the files used in the tutorial so I will have to guess at the viewpoint from the input description. You can try the following parameters for rview to get a view of the window: -vp .1 .1 .75 -vd 1 .8 0 -vh 60 -vv 45 This puts the viewer in the corner looking towards the opposite walls. From there, you should be able to adjust the view to your liking using the "aim" command from within rview. You are right that the view direction gets normalized by the program. Since it is a vector indicating only the direction to look, it's magnitude is irrelevant. If you give a vector of 1 2 .5 it is the same to Radiance as if you had given 2 4 1 or any positive scaling thereof. -Greg P.S. Regarding your question about transparent spheres. I think I need some more specifics. Are the spheres of a solid material, or like soap bubbles? What problems have you had with other renderers? Date: Mon, 29 Apr 91 12:45 EST From: Cristy Subject: Re: Radiance Sorry about not supplying enough information about transparent spheres. The two problems I am trying to solve has to do with molecular modeling. Many times a chemist will come to me with a molecule he/she is interested in rendering for publication. Typically they want a cluster of atoms in the center of the molecule to be opaque and the surrounding atoms to be almost totally transparent. That way you can see the structure of the molecule and still emphasize the area of interest (typically the center atoms). I have tried many raytracers (RAYSHADE, VORT, DBW, TRACER, etc.) and they produce a nasty artifact at the area of intersection of two transparent spheres. For example, assume two atoms represented by spheres that intersect about 20% of the total area. The area of intersection normally appears to be black in all the raytracers I used. 
This black area interferes with the transparent effect I am looking for and distracts from viewing the area of interest in the center of these outer transparent spheres. Of course this effect gets worse the more transparent spheres you have-- to the point where you cannot see the opaque atoms in the center. So I have been looking for a renderer that correctly models the area of intra-penetration of two transparent spheres. I looked at radiosity programs, talked with experts in the field (Pat Hanarhan for instance) but have not found a program that handles this problem well. I do not have the expertise to write my own algorithm so... I am hoping Radiance may work. I am looking at the problem now in Radiance, but I thought that you would have a better feel of Radiance's capability to handle the problem correctly. Also perhaps you know of the correct material properties and other Radiance parameters to solve the intersecting transparent sphere problem. Thank you in advance cristy@dupont.com Date: Tue, 30 Apr 91 08:44:22 +0200 From: greg (Greg Ward) Subject: Re: Radiance I'm guessing that the problems you've encountered with other ray tracers has to do with thier handling of dielectrics. If you were modeling the spheres as solid glass objects, there would definitely be some confusion as the two objects interpenetrated. You would be best off modeling the spheres as an outer surface and an inner surface with a small difference in radius. Thus, you would be intersecting two "bubbles" rather than two solids. Fortunately, Radiance has a material type that allows you to model an infinitely thin glass object with a single surface. The material type is called "glass" and the parameters are the transmission in red, green and blue, which you will probably want to set to 1: void glass clear_glass 0 0 3 1 1 1 clear_glass sphere bubble1 0 0 4 x1 y1 z1 r1 clear_glass sphere bubble2 0 0 4 x2 y2 z2 r2 ... This should give you the desired effect. Again, you should be able to get similar results by using two slightly different concentric spheres (with the inside radius negative, or however they specify an inward surface normal). -Greg Date: Fri, 31 May 91 12:04 EST From: cristy%ESSUN3%ESVAX@dupont.com A fellow scientist wants to model atomic orbitals with Radiance. Unfortunately there is no easy way (that he knows of) to express orbitals in terms of cartesian space. The equations are always in the form of a spherical harmonic. Is it possible to express surfaces in spherical coordinates with Radiance? Thanks in advance. Date: Mon, 3 Jun 91 16:20:01 +0200 From: greg (Greg Ward) Subject: orbitals Cartesian and sphereical coordinates are of course convertible using a simple transformation: x = rho sin(theta) cos(phi) y = rho sin(theta) sin(phi) z = rho cos(theta) I suppose what you are really asking is if there is an easy way to represent surfaces defined by some arbitrary (in this case spherical harmonic) function. The answer is a qualified yes. The generator program gensurf can be given a parametric description of the surface in terms of two independent variables (for a spherical harmonic these would probably be theta and phi) from which it produces a tesselation of the desired surface into quadrilaterals and triangles. Thus, the Radiance rendering programs themselves do not model arbitrary parametric or implicit surfaces, but gensurf can be used to approximate most parametric surfaces as a collection of polygons. Unfortunately, I have not done anything about modeling implicit surfaces (ie. 
surfaces described by a function of the form F(x,y,z) = 0). Tesselating such surfaces is not an easily solved problem, and I have not yet had a strong enough need for them. -Greg ======================================================================= PHONG Phong surface normal interpolation Date: Mon, 6 May 91 12:55:44 EST From: Eric Ost Subject: smooth surfaces using normals If I have a surface defined using discrete polygons, actually triangles, is there a way that I can use surface normal vectors to derive a smoothly shaded appearance? For example, the human form geometry we have access to consists of approximately 3500 triangles. When we render instances of this geometry the edges between triangles are painfully apparent. Is there some way to smooth these edges without resorting to interpolating the triangles themselves and creating a geometry database with an increased number of polygons? Thanks. eric Date: Tue, 7 May 91 08:57:35 +0200 From: greg (Greg Ward) Subject: Re: smooth surfaces using normals Hi Eric, Yes, it is quite possible to interpolate the surface normals, using a procedural texture (texfunc) applied to the elements individually. Of course, this is not very convenient if you already have the database, but it can be done using rcalc. Unfortunately, the math for this procedure is not very straightforward, and the only place I've done it is in the Phong shading procedures of gensurf (ray/src/gen/gensurf.c and see also ray/lib/surf.cal). I suppose this would be a nice feature to have built into Radiance directly, but I didn't feel that the method worked well enough to warrant it. Specifically, Phong (bilinear) surface normal interpolation doesn't work for some degenerate cases or for concave polygons, and I have no idea how to apply it to polygons with more than four vertices. I think there are some more general surface normal interpolation schemes floating around the literature, but I couldn't really recommend one to you because I haven't tried any of them. Even gensurf may not help you that much, since I wrote the code to handle only paired triangles, and never got it to work right for lone triangles. I hate to be so discouraging. This is really something I would like to see happen, so if you can find a method that you think will do the trick, I will help you create an rcalc procedure or even a C program that implements it. -Greg ======================================================================= INTERN Radiance internals Date: Thu, 9 May 91 16:45:32 EDT From: richards@eleceng.ee.queensu.ca (Haydn Richardson) Subject: View Plane Transformation Hi Greg, I'm trying to write a function which will read the 3-d world coordinates of an object, as specified in a .rad file and the view parameters from a view file to generate the corresponding image plane coordinates. My major problem is defining the relationship between the distance between the view origin vp and the view plane in terms of the -vh and -vv parameters specified in the view file. It seems to me that this information is embedded in the function viewpixel() which is in the file image.c. However, I have been unable to find where this function is called from if at all. I would be most appreciative if you could elaborate on the comments for this function, or address my problem more directly. For example is double d in viewpixel() the distance between the viewpoint and the view plane? What are the expected input and output interpretations of the parameters xp,yp,zp? What is the expected input p? 
As always thanks for the assistance -Haydn Richardson Queen's University at Kingston Date: Fri, 10 May 91 11:57:28 +0200 From: greg (Greg Ward) Subject: Re: View Plane Transformation Hi Haydn, So, we've been digging in my code, have we? I don't get many questions about internals, so please forgive me for the confusion that follows. The distance between the view point and the image plane is undefined, since the image plane is an imaginary entity. Viewpixel() takes a point in world coordinates (ie. from the Radiance scene description) and computes the corresponding image position in normalized coordinates for a particular view. The input point, p, is in world (x,y,z), and the return values *xp and *yp are in normalized view coordinates. These coordinates run from (0.,0.) at the lower left corner of the image to (1.,0.) at the lower right and (1.,1.) at the upper right. The returned value *zp is distance along the view direction from the viewpoint to the world plane containing the point p. Perhaps this is the grail you seek. I apologize for the unreadability of my code. It even gives me trouble sometimes. You should ignore anything called i, j, k, d, etc. These are almost always temporary variables whose meaning changes depending on where you are in the procedure. Such is the case in viewpixel(). The viewpixel() routine is used by certain picture processors such as px/pinterp.c and util/glareval.c. I don't recommend learning how to use it by reading these modules, however. They are rather nasty. Good luck! I'm gone all next week, so any further questions will have to wait until I get back for a response. -Greg ====================================================================== PREVW X11 previewer for Radiance From: sumant@shakti.ernet.in Subject: Radiance input Previewer. Dear Greg, I am planning to use the Radiance input data format for my experimentation in Image Synthesis. I've just written its line drawing previewer for X. I've successfully tested it on 3 UNIX Platforms. If U want it to be included in your PD distribution I'll be very happy to send it to U. If U make any scene data public, pl let me know. I'll be interested in using them. ---- sumant (email : sumant@shakti.ncst.ernet.in) ------------------------------------------------------------------ Sumant Narayan Pattanaik N.C.S.T. Juhu, Bombay 400 049 Date: Tue, 21 May 91 08:59:24 +0200 From: greg (Greg Ward) Subject: Re: Radiance input Previewer. Hello Sumant, That's great! I'd love to try your previewer out, and include it for distribution. I am in the process of setting up scene models and programs for public redistribution via anonymous ftp, and I'll let you know when it's ready. In the meantime, you can deposit your previewer in the public ftp directory on hobbes.lbl.gov (128.3.12.38) where I can retrieve it. If it's small enough to go by e-mail, you can send it to me directly at "greg@lesosun2.epfl.ch" instead. Thanks a lot! -Greg Date: Fri, 24 May 91 13:25:35 +0200 From: greg (Greg Ward) Subject: Re: Radiance input Previewer. Hello Sumant, Thank you very much for the previewer. I just tried it out and it works great! It should come in very handy for anyone using Radiance. Do you mind if I include it at our ftp site so that people can pick it up? Just a couple of minor suggestions. It would be nice if the previewer accepted multiple Radiance files, one after the other, or read from the standard input if none were given. This should not be very difficult. 
I noticed that you adapted the object file reading routine, so just giving it NULL makes it read from standard input. Also, you can avoid the need for specifying a bounding box by using the routines from oconv (bbox.c, cone.c, face.c, misc.c) to compute the bounding box for you. This would require two passes on the input file, however, and it is nice that the user can give a different bounding box to specify clipping so this may not be so important. For a long time, I required the user to include the bounding box in the input files directly! Thank you for such a wonderful service. I still haven't gotten the model library together, but you will be the first to know! -Greg Subject: Re: Radiance input Previewer. Date: Sun, 26 May 91 11:44:01 +0530 From: sumant@shakti.ernet.in Dear Greg, I'll do the needful. I'll not try the bounding box computation. I am planning the following modifications. 1. The input parser in not rugged. I am now adapting it from "readobj.c". 2. The color of the objects are arbitrary now. I'll take the hints from the input file and color them accordingly. I'll come bcak to U in a day or two. I want that it is avaiable to public. However, i havenot asked my boss yet. I'll do it when I am through. I donot see any problem. In any case most of the code used are assembled from book or PD software. So it is really for public consumption. If there is any problem I'll substitute all references of my name by "anonymous" and send it to U. --- sumant Date: Tue, 28 May 91 08:56:45 +0200 From: greg (Greg Ward) Subject: previewer Helo Sumant, Thank you for the new version of your previewer. I tried it out and it works very well indeed. Thank you for getting (and granting) permission to redistribute it. I am in the process now of setting up the public ftp site at hobbes.lbl.gov (128.3.12.38). I will put your software under pub/programs, with a one-line description of its function. At the same ftp site (by tomorrow, hopefully) you should find Radiance objects and models that you may use under pub/objects and pub/models. Thank you again for all of your help! -Greg Date: Tue, 28 May 91 13:44:39 +0530 From: sumant@shakti.ernet.in Dear Greg, Thank U for the message. I'll try to get the new version of Radiance from your site. The earlier version's (the version I got in the 1st Week of May) documentation was a bit short. I hope it is improved now. About the previewer, I must tell U the bugs I know of. 1. In X11.c line 103 fprintf(..) has missing file pointer. Please correct it to fprintf(stderr,....) 2. The input parser does not support yet the description in the form "modifier alias identifier reference" and for the other description form "modifier type identifier n s1 s2 s3 ... sn 0 m R1 R2 ... Rm" the parser takes the sequence literally in terms of lines in which they appear. I'll correct this problem by adapting to your Parser and when I am ready I'll send U a copy. However, I'll be a bit late. Thanks again. Regards. ---- sumant (email : sumant@shakti.ncst.ernet.in) ------------------------------------------------------------------ Sumanta Narayan Pattanaik N.C.S.T. Juhu, Bombay 400 049 ===================================================================== SPEC Spectral distributions curves Date: Tue, 28 May 91 09:16:16 -0400 From: spencer@cgrg.ohio-state.edu (Stephen N. Spencer) Subject: Radiance archive I have a user here at OSU who has done some work with spectral distribution curves. 
He asked me if there any way to include these distribution curves, either for light sources or objects, in RADIANCE. I told him that I didn't think so. When your mail arrived just now I thought, well, it won't hurt to ask the one person who can give a definitive answer. Also, is there any .ies files available? I've got the "ies*.{rad,dat}" but don't have any IES input files. Just curious. We're having great fun with RADIANCE around here. It's going to be a mix of Industrial Design students and Art students using it (we have a sophisticated scanline renderer as the main renderer but I've introduced RADIANCE as an alternative). steve Date: Tue, 28 May 91 15:51:11 +0200 From: greg (Greg Ward) Subject: Re: Radiance archive Hi Steve, It is possible to make multiple passes with Radiance, using the three RGB channels provided to mean some other selection of spectral samples. For example, you could have three sets of material files, each with slightly different RGB values corresponding to different sample locations. (I recommend interleaving the samples.) You then run rpict on each one and produce three output pictures. (Be sure to set -sj to 0 so the pixels correspond.) These three pictures have a total of 9 spectral samples, which you can combine however you want using pcomb before the final display. This is obviously less efficient than having more spectral samples in the calculation itself, but until I have the spectral response data, I can't see the sense in implementing more samples right at the moment. Regarding IES luminaire data, I put what I have into the archive directory pub/iesdata. Thank you for reminding me of this. Although I didn't have much to contribute myself, maybe others will pitch in. -Greg From: spencer@cgrg.ohio-state.edu (Stephen N. Spencer) Subject: Radiance archive Thanks! I don't think this is what my user will want to hear (it sounds rather time-consuming) but hey, it WILL work. steve ====================================================================== DAYF Daylight factors and unknown programs [The following is in response to a letter from John Mardaljevic at Leicester Polytechnic. I am too lazy to retype the letter here, but John was having some problems with Sun Open-Windows and Radiance version 1.3, and he also had some questions about undocumented programs and computing daylight factors. Incidentally, I have added documentation for cnt, neat, and total and plan to have a daylight factor calculation script ready in the next release.] Date: Wed, 29 May 91 10:02:09 +0200 From: greg (Greg Ward) To: edu@leicp.ac.uk Subject: Radiance Hello John, Your letter was just forwarded to me regarding your questions on v1.3.1. The problem you are having is an initialization problem that is associated with the open look window manager (olwm). It has been fixed in version 1.4, which is available by anonymous ftp from hobbes.lbl.gov (128.3.12.38). If you do not have access to ftp and do not want to wait for another upgrade via mail, I can send you the new version of x11image.c by e-mail and you can recompile it yourself. Alternatively, you can run twm or some other window manager instead of olwm and x11image should work. X11image never has a command window, so that is normal. The manual page for x11image is identical to ximage, and I even renamed the program to ximage in the next release. Sorry for the confusion. Unfortunately, the problems with xshowtrace haven't been fixed in release 1.4, although I've fixed them since 1.4 and the program will work in the next release. 
Rtrace should work fine, since it doesn't depend on the window system unless you run it in conjunction with x11image. As for the missing manual pages, I apologize. I never did write up some of the programs I include on the distribution since I figure most people won't be using them. There are manual pages for calc, rcalc and ev in ray/src/cal/man. I am sorry they are not in the expected place. The only documentation for the others is the source code, but here is a one line description of each: cnt - integer counter, try "cnt 2 5" colorscale - generates color scale picture genwindow - calculates light from window with venetian blinds greyscale - generates grey scale picture lam - joins lines from multiple files lookamb - examines contents of Radiance ambient file mt160r - output driver for Mannesman-Tally dot matrix printer neat - neatens up columns of numbers and aligns decimals oki20c - output driver for OkiData OkiMate 20 color printer paintjet - output driver for HP color paintjet printer sun.com, sundev - driver programs used by rview for sun windows total - sums up columns of numbers As for daylight factors, the most efficient method is to use rtrace with the -oi option (will be -I in 1.5) and give it your list of workplane points with up vectors (ie. 0,0,1). Then, take the output numbers and divide them by the ambient level given to you by gensky (multiplied by pi). Here is a more complete example: Let's say we have a room that we have compiled into room.oct and we want to calculate daylight factors on a 10x8 grid running from x=1 to x=10 and y=5 to y=11 at a height z=3. First, we run gensky manually for the day and time we are interested in, like so: gensky 5 29 10 -a 47 -o -7 -m -15 Of course, you would adjust the latitude, longitude and standard meridian to correspond with your site location. The above produced the following comment in its output: # Ground ambient level: 7.582072 This ground ambient level corresponds to the irradiance/pi due to the sky without direct solar, which is what we will divide into our irradiance values from rtrace to get daylight factors, thus: cnt 10 8 | rcalc -e '$1=$1+1;$2=$2+5;$3=3;$4=0;$5=0;$6=1' \ | rtrace -oi room.oct | rcalc -e '$1=(.3*$1+.59*$2+.11*$3)/PI/7.582' \ > room.df The factors of .3, .59 and .11 are to get from RGB to brightness. Note that the values in the output file do not have their associated input values with them. You can add them in again like so: cnt 10 8 | rcalc -e '$1=$1+1;$2=$2+5' | lam - room.df \ | neat 4.9 > room.df.final Notice that I tried to use as many of the programs you asked about as I could! Notice also that doing real calculations with Radiance currently requires long, unreadable command lines. We hope to make some of this easier in the next release by adding shell scripts to do these kinds of useful things without requiring so much user guidance. I hope some of this helps. -Greg ================================================================= VISION Vision-3D modeler for the MacIntosh Date: Fri, 17 May 91 9:58:22 NZT From: pdbourke@ccu1.aukuni.ac.nz To: ray@hobbes.lbl.gov Subject: Mac modeller The public domain modeller Vision-3D for the Mac II family is about to support Radiance data files as an export option. This has already been done but the copy on our FTP site hasn't yet been updated (I want to put some more features in the next release) If anyone is interested however the current version of Vision-3D with Radiance file export can be made available. 
-- | Paul D Bourke pdbourke@ccu1.aukuni.ac.nz (130.216.1.5) | Date: Fri, 17 May 91 11:16:42 +0200 From: greg (Greg Ward) Subject: Re: Mac modeller Hi Paul, Thank you for putting a Radiance output option into your modeller! I have had a student working on a translator from Sculp 3D RIB files to Radiance with limited success. It would be great to get a hold of a real 3D modeller for the MacIntosh II, since that's the computer I use most often. Does your program work with A/UX 2.0? I have Radiance working on the Mac under this operating system now. -Greg Date: Sat, 18 May 91 10:38:22 NZT From: pdbourke@ccu1.aukuni.ac.nz Subject: Re: Mac modeller > > Thank you for putting a Radiance output option into your modeller! I have > had a student working on a translator from Sculp 3D RIB files to Radiance > with limited success. It would be great to get a hold of a real 3D > modeller for the MacIntosh II, since that's the computer I use most > often. Well, Vision-3D is a shareware modeller...MicroStation is a real modeller. The RayShade and Radiance export facility has received so much attention that I will put a preliminary copy of the next version of Vision-3D in our FTP directories early next week. The readme file will indicate when this has been done. In case you don't know we are ccu1.aukuni.ac.nz (130.216.1.5) The directory you want is mac/architec. > Does your program work with A/UX 2.0? Don't know. What are the issue, I use Think C (never even seen AUX) > I have Radiance working on the Mac under this operating system now. Is it possible to generate a version that would run under the finder OS. This would be of great interest to many users. -- | Paul D Bourke pdbourke@ccu1.aukuni.ac.nz (130.216.1.5) | Date: Tue, 21 May 91 08:55:41 +0200 From: greg (Greg Ward) Subject: Re: Mac modeller Hi Paul, Unfortunately, I don't have a version of Radiance for the MacOS, and I don't expect to in the near future. The rendering program in Radiance was really meant to run in the background and may require large amounts of memory for complex scenes, so I don't think it would be very simple to port it to the native Macintosh OS. You are welcomed to try! Frankly, I haven't got much patience for menus and that sort of programming. A few years ago, someone I hired wrote a 3d editor for Radiance files on the Mac and the code was bigger than my renderer! It never quite worked to my satisfaction (it crashed a lot) so I didn't advertize it much. This is another problem with the Mac environment -- frequent crashes that spell disaster for background processes. Also, the system does not allow that much CPU time to such programs, preferring to reserve as much as possible for the application running in the foreground. Perhaps this will change a bit with version 7, but I doubt it. I will pick up the modeller as soon as you have the right release on there. Thanks again. -Greg Date: Wed, 22 May 91 7:48:49 NZT From: pdbourke@ccu1.aukuni.ac.nz Subject: Re: Mac modeller > Unfortunately, I don't have a version of Radiance for the MacOS, and I don't > expect to in the near future. The rendering program in Radiance was really > meant to run in the background and may require large amounts of memory for > complex scenes, so I don't think it would be very simple to port it to the > native Macintosh OS. Yes, I tried to render some landscapes woth 2 million polygonal facets, no luck yet and our SGI has 64MB ! > You are welcomed to try! Frankly, I haven't got much > patience for menus and that sort of programming. 
A few years ago, someone > I hired wrote a 3d editor for Radiance files on the Mac and the code was > bigger than my renderer! You shouldn't be very surprised at this, a modeller is much more complex and varied than a renderer. > It never quite worked to my satisfaction (it > crashed a lot) so I didn't advertize it much. This is another problem > with the Mac environment -- frequent crashes that spell disaster for > background processes. Can't remember my last crash! I am afraid that the impression people have of the Mac as unstable is almost always due to substandard software especially INITs, games, utilities in the public domain. The system software and toolbox is only resposible for a small fraction of the problems people have with Macs. > Also, the system does not allow that much CPU > time to such programs, preferring to reserve as much as possible for > the application running in the foreground. Perhaps this will change > a bit with version 7, but I doubt it. No it hasn't really changed. An application can specifiy how often a background task is "looked" at and for how long. Most developers tend to give background tasks only a peek every now and then because they want maximum performance for themselves. I have only seen one application which allowed the user to specify the sharing load, a good idea though. > I will pick up the modeller as soon as you have the right release on there. > Thanks again. I have placed a beta version of the modeller that supports Radiance in our archive. There is an administrative problem though with our site, they are worried about the huge amount of traffic they are experiencing. -- | Paul D Bourke pdbourke@ccu1.aukuni.ac.nz (130.216.1.5) | Date: Wed, 22 May 91 09:26:03 +0200 From: greg (Greg Ward) Subject: Re: Mac modeller Hi Paul, I just picked up your modeller yesterday, and I must say I'm impressed. I hope it took you a long time to write and document. Please don't tell me you did it during donut breaks on Thursdays. If you want to render large scenes with Radiance, you need to somehow break them down into repeatable groups using instances. Of course, this doesn't work if what you're rendering doesn't have any inherent repetition inherent repitition. Almost everything I do does. I agree that crashes are caused by poor inits more than bad system software, but I have little control over what other people do to their machines and they seem to like those silly little bastards. I've been using a shared machine here at EPFL and every day I switch it on it seems like there's another one. Maybe they breed overnight. I go for a bite while the thing boots up. I didn't find the export option for Radiance in the modeller. It's not in the expected place (File Export...). Are you sure it was on yesterday's release? If network traffic is a problem, I would be happy to put Vision 3D on my ftp site in California. That is, if you have no objections. From the manual, it sounds like you were (thinking?) of marketing the program at one time. I'd be interested to hear what your current attitude is on that. By the way, I ran Vision 3D under A/UX to try it out, and everything I tested seemed to work fine. The only problem I noticed was that the cursor disappeared after I quit. There must be some deinitialization that's required for A/UX that the normal Finder takes care of. I've noticed other strange cursor happenings in the past with other programs as well. The other thing that wasn't quite right was that the 32-bit clean flag was not set on the program. 
That's a function of the compiler, I suppose, though it is possible to set this flag yourself using ResEdit. The requirements for applications running under A/UX (as I understand it) besides 32-bit clean are that you stick with Apple's guidelines (which I guess you have) and that you don't access certain little-used toolbox routines that the A/UX people haven't implemented yet. Most of these have to do with strange color table manipulations as near as I can tell. Once I get the correct version of Vision 3D (assuming I ain't got it), I'll do a more thorough testing under A/UX so I can tell you if there are any other problems. I don't expect there will be too many, anyway. Do you know Robert Amor? How did New Zealand get so many hot shots when they have so little money? As I said before, I'm impressed. -Greg Date: Wed, 22 May 91 20:32:14 NZT From: pdbourke@ccu1.aukuni.ac.nz Subject: Re: Mac modeller > I just picked up your modeller yesterday, and I must say I'm impressed. > I hope it took you a long time to write and document. Please don't tell > me you did it during donut breaks on Thursdays. I did work on it fairly well full time for a few months. I would love to have time to rewrite the "definitive" modeller now that I know know how. ie: Vision-3D was my first 3D software but it has been left behind because of other commitments. I would love to write a good modeller specifically for RenderMan but it's probably a years work! > I didn't find the export option for Radiance in the modeller. It's not > in the expected place (File Export...). Are you sure it was on yesterday's > release? Sorry, humble...humble... my mistake, don't know how it happened, etc etc... I will fix up the directory, I might try mailing you the application. > If network traffic is a problem, I would be happy to put Vision 3D on > my ftp site in California. That is, if you have no objections. From > the manual, it sounds like you were (thinking?) of marketing the program > at one time. I'd be interested to hear what your current attitude is > on that. Please put it somewhere in the US and let me know where. There has been huge traffic loads to our site, I am becoming unpopular with the Computer Centre administrators. The current position is that the program is shareware, which means in my book that you should feel free to copy and distribute the software. If you keep and use it for any sort of financial gain then I would appreciate $120 NZ (approx US$80) > By the way, I ran Vision 3D under A/UX to try it out, and everything I > tested seemed to work fine. The only problem I noticed was that the > cursor disappeared after I quit. There must be some deinitialization that's > required for A/UX that the normal Finder takes care of. I've noticed other > strange cursor happenings in the past with other programs as well. The > other thing that wasn't quite right was that the 32-bit clean flag was not > set on the program. That's a function of the compiler, I suppose, though > it is possible to set this flag yourself using ResEdit. The requirements > for applications running under A/UX (as I understand it) besides 32-bit > clean are that you stick with Apple's guidelines (which I guess you have) > and that you don't access certain little-used toolbox routines that the A/UX > people haven't implemented yet. Most of these have to do with strange > color table manipulations as near as I can tell. Yeah, I have just installed system 7 and all my programs seem to work OK. 
I do use my own cursors when the mouse is in "my" windows (it looks like the normal crosshair cursor but it's not). You will probably find that if you quit from Vision-3D with the cursor away from my windows all will be fine. I'll put cursor initialising on my list of things to do. > Once I get the correct version of Vision 3D (assuming I ain't got it), > I'll do a more thorough testing under A/UX so I can tell you if there are > any other problems. I don't expect there will be too many, anyway. Thanks > Do you know Robert Amor? How did New Zealand get so many hot shots when > they have so little money? As I said before, I'm impressed. Yes, how do you know him? I met him last week, he's come up to the Computer Science dept here to do some brief contract work on an expert system for a building industry firm. His address here is robert-a@cs.aukuni.ac.nz He just gave me 40MB of radiance examples...! -- | Paul D Bourke pdbourke@ccu1.aukuni.ac.nz (130.216.1.5) | Date: Thu, 23 May 91 09:02:01 +0200 From: greg (Greg Ward) Subject: Re: Mac modeller Hi Paul, Thank you for sending the beta version of Vision3D. I know what you mean about wanting to start over. I've put the modeller just by itself on anonymous ftp at hobbes.lbl.gov (128.3.12.38) under pub. You can put the other hqx files there yourself if you feel like it. I am planning to get the whole thing organized with drop off points and pick up points and read me files and so on, in a day or two. (How many weeks have I been saying this?) Actually, it was largely Robert Amor's idea to have a shared Radiance archive, and I recently got a lot of interest at the Eurographics workshop on rendering. Robert came by Berkeley at some point last year and we talked a bit. He knew an awful lot about Radiance and had some good suggestions and one or two programs for me. Also, we had been exchanging e-mail for some time on the topic of building data representation, a shared interest. I had a quick look at the Radiance export files from Vision3D and they look great. I haven't run any tests, yet, though, so I'll have to let you know how they turn out. I read quickly through your little article on CAD formats and took some of the things you said to heart. My next release of Radiance will be much more forgiving of cones and spheres with "illegal" radii so there will be a little less for you to worry about. I would be interested in any specific comments or criticism you have about the Radiance scene description format. It was basically written for my own (ie. the implementer's) convenience rather than ease of use, so I realize it is somewhat brain dead. (Scanf is my parser and printf is my formatter...) -Greg Date: Sun, 26 May 91 12:02:00 NZT From: pdbourke%ccu1.aukuni.ac.nz@csa1.lbl.gov Subject: Triangulate Something else of mine that now supports Radiance export...! Triangulate is a Mac II utility that takes a "random" distribution of samples of a surface and generates a triangulated (Delauney) or gridded mesh (user specified resolution) representation of the surface. We use it extensively for generating terrain models, the data is generated either from manual entry from site surveys or by digitizing existing contour maps. A number of export formats are supported, DXF, Super3D text, and now Radiance. The archive I uploaded has been passed through Stuffit (.sit) and then BinHex (.Hqx). It contains the application and the user manual in MS Word 4 format. 
-- | Paul D Bourke pdbourke@ccu1.aukuni.ac.nz (130.216.1.5) | ~s Radiance Digest, v1n4 Dear Radiance User, Here is another culling of mail regarding Radiance. There are many new users since the last mailing, and if you are one of them I would like to send you my welcome. Also, if you would like back issues of this digest, just send me some mail. (No one has asked for any yet -- should this be telling me something?) Topics included in this digest are the following: GEOM Geometric Primitives in Radiance ATMOS Simulating Atmospheric Effects LARGE Large Radiance Models PORT Portability Issues (on the long side) PINTERP Uses for the Pinterp Program ===================================================== GEOM Geometric Primitives in Radiance Date: Sat, 8 Jun 91 17:24:43 NZT From: pdbourke%ccu1.aukuni.ac.nz@Csa2.lbl.gov Subject: Polygon primitive To: GJWard@Csa2.lbl.gov Regarding my raving earlier about the ordering of polygons for the direction of facet normal...how about a special polygon primitive which is defined as double sided, it is really two polygons with the same vertices but one ordered clockwise and the other anticlockwise. It seems better to make this a primitive that the renderer "knows" about than to have the data files include both polygons. Does Radiance handle coincident polygons like that descibed above? Also I am fustrated by renderers which don't know about lines and therefore force me to turn lines into cylinders. It would seem nicer for the renderer to turn them into cylinders of radius = 1 pixel, ie: normally the modelling or translator software does not know the pixel size (image space not world) Paul D Bourke Date: Mon, 10 Jun 91 08:57:54 +0200 From: greg (Greg Ward) To: pdbourke%ccu1.aukuni.ac.nz@Csa2.lbl.gov Subject: Re: Polygon primitive Hi Paul, As I said in an earlier mail, Radiance ignores polygon "sidedness" for all material types except dielectric. Thus, most polygons in fact act as though they are two-sided during rendering. Dielectric objects must be modeled as solids, since light refraction happens on entry and on exit, and a two-sided polygon would be equivalent to an infinitely thin volume, which should be modeled instead by the material "glass" that is designed for this purpose and that ignores surface orientation. The hack used by most scanline renderers of culling back-facing polygons does not really help that much in a ray tracer so I felt that all faces should be treated as two-sided in Radiance. This makes the modelers job a lot easier (whether it be a modeler programmer like youself or some poor fool like me using a text editor). To answer your question, "does Radiance handle coincident surfaces?" I would have to say no. There is a well-known problem in ray tracing which is avoiding a reintersection with a surface upon reflection or refraction. For polygons, it is an easy decision not to test the intersected surface again for intersection, but spheres and other curved surfaces can have multiple intersection and the decision is not so straightforward. The nicest way to avoid this problem is to insist that the first intersection be some minimum distance from the starting point, which precludes the possibility of properly handling coincident surfaces as well. I could discuss this topic in more detail, but I see my letter running off the top of the screen and sense that I should move on. Drawing lines does not make sense for a physically-based rendering program because lines in fact do not exist. 
I suppose true two-dimensional surfaces don't exist either, but for the purpose of light interaction they are a fair approximation of the real world. Physics aside, it is not easy in a ray-tracer to decide which pixels to paint with a line because the rays go into the scene from the pixels and may not even be aware when they pass close to a line. Line drawing works much better the other way where you start with the world coordinates of the line and draw a nice pixel-width object on the image. Believe it or not, Radiance sometimes does not even know what the pixel size is because it is frequently used in a luminance mode where it is not generating an image but is being used instead to calculate light levels. Also, since lines are non-physical, it would not be possible to shade them properly and they would wind up as these anomolous glowing objects in an otherwise natural-looking scene. For the purpose of rendering with a ray-tracer, it really makes the most sense to convert the lines into cylinders as you do and give them a radius that makes the most sense of the size of the object the lines are meant to represent, ie. grass or sticks or whatever. The lines may show up as thicker up close and thinner (or dissappearing) at a distance, but this is the nature of physical reality. Setting the line width proplerly usually means leaving it to the user. -Greg Date: Thu, 1 Aug 91 8:43:19 NZT From: pdbourke%ccu1.aukuni.ac.nz@Csa2.lbl.gov Subject: A few questions To: GJWard@Csa2.lbl.gov I have been using radiance a bit now although generally in a simple minded way, I do have a question on which you may have some suggestions. What is the "nicest way" to include parameters in a scene description. For example, define some constants which are used thoughout the scene description. This would allow the user to change the constants to meet his/her needs. A real example, I would like to define a constant called RADIUS say, this would be used thoughout the geometry description of cylinders and spheres. Paul D Bourke From greg Fri Aug 2 08:38:51 1991 Date: Fri, 2 Aug 91 08:38:50 +0200 From: greg (Greg Ward) To: pdbourke%ccu1.aukuni.ac.nz@Csa2.lbl.gov Subject: Re: A few questions Hi Paul, I have considered from time to time adding variables to the scene description but never did it. I guess I wanted to keep the job of parsing the scene files as simple as possible for other programs/programmers who might want to use the information. The first method that occurs to me for adding variables to the input file is with the C language preprocessor, /usr/lib/cpp, or better yet the macro processor m4. This will allow you to define variables as well as (macro) functions with arguments. I have not tried this myself, but I see no reason why it wouldn't work. -Greg Date: Sat, 3 Aug 91 14:31:53 NZT From: pdbourke%ccu1.aukuni.ac.nz@Lbl.Bitnet Subject: Radiance (of course) To: greg%lesosun1.epfl.ch@Lbl.Bitnet Thanks for the reply, I didn't think that parameters were possible in the Radiance scene description but thought I would check in case there were some as yet undocumented features. You may have noticed a new version of Vision3D in the Mac FTP folder. If not then you may want to change the file privileges. Has anyone else to your knowledge written a translator from Super3D (another Mac modeller, not all that powerful but widely used, it is to 3d modelling what Lotus is to speadsheets - the product to which others are compared). 
I have worked on one over the last week for a project here, before I develop it much further I would like to make sure there is not already something out there. I'll send you a GIF sometime soon of some 3D L-System plants I've been working on recently. They look quite good I think, the application that allows the user to specify the production rules, etc, exports to Radiance. Paul D Bourke Date: Mon, 5 Aug 91 10:00:04 +0200 From: greg (Greg Ward) To: pdbourke%ccu1.aukuni.ac.nz@Lbl.Bitnet Subject: Re: Radiance (of course) Hi Paul, Thanks for the new version of Vision3D. I did noticed it and renamed the files, getting rid of the old one and updating the README file. I have since made the README file in that directory writable so you can modify it if you like. I have had a student working (for some time) on translating the Renderman output of Sculp3D into Radiance format, but I don't know off hand of anyone who has worked with Super3D files. You can drop GIF (Targa, PICT, Sun rasterfile, Radiance pictures) off in the pub/xfer directory on hobbes with anonymous ftp. Which reminds me, I need to write some new image format translators... My prediction for computer science is that in two years the hardware industry will abandon the ASCII standard and the QWERTY keyboard and the few bitter programmers left will spend all their time hacking on new viruses. -Greg ======================================================== ATMOS Simulating Atmospheric Effects To: greg@hobbes.lbl.gov Subject: Atmospheric effects From: Jerrell Ballard Date: Thu, 13 Jun 91 11:31:34 EDT Hello Greg, Are you aware of any RADIANCE functions that *roughly* approximate atmospheric effects ? I have been generating some landscape scenes, and then post-processing to add the atmospheric effects. It would handy to be able to add atmospherics at time of rendering. Jerrell Ballard Geographical Information Systems Team Waterways Experiment Station United States Army Corp Engineers From greg Fri Jun 14 08:56:22 1991 Date: Fri, 14 Jun 91 08:56:18 +0200 From: greg (Greg Ward) Message-Id: <9106140656.AA06417@lesosun1.epfl.ch> To: ballard@mc1.wes.army.mil Subject: Re: Atmospheric effects Status: RO Hi Jerrell, If by atmospheric effects you are referring to scattering, absorption, etc., the answer is no, Radiance does not support it. You can, however, model high clouds and other patterns in the sky directly in Radiance. If this is what you are after, I can give you some hints and examples in another letter. For now, I'll assume that what you want is the former. The best post-process to get a rough approximation to scattering (and/or absorption) would use the -z option of rpict to produce a file of pixel distances then use this information to modify the colors in the final picture. This would be done most efficiently by a special purpose program, but you can do it also with the existing tools pvalue and pcomb. By way of example, let's say you want to imitate haze that fades to white as an exponential function of distance. You would first render an ordinary image with rpict, using the -z option to produce a distance file, like so: % rpict -x 512 -y 480 -z scene.z scene.oct > scene.pic Then you would use pvalue together with pcomb to apply the desired function to the pixels based on their distance. Pvalue is necessary to convert the machine floating point numbers in the z-file into a Radiance picture because pcomb only works on this format. 
% pvalue -r -b -h -df -x 512 -y 480 scene.z | \
	pcomb -e 'ro=f(ri(2));go=f(gi(2));bo=f(bi(2))' \
		-e 'f(p)=c*p+1-c;c=exp(-gi(1)/udist)' \
		-e 'udist=50.0' - scene.pic > scene.fade.pic
Note that it is necessary to repeat the image size to pvalue so it knows what to do with scene.z. The unit fade-out distance "udist" can be changed from 50.0 to whatever you like to provide the desired fade-out. If you tell me what kind of effects you desire (perhaps after some experimentation), I may even write a program to do it more efficiently for you (still as a post-process, though). I never did much with atmospherics because most of my scenes are indoors, and I prefer to create accurate physical simulations rather than quick hacks. However, atmospherics is one thing I don't expect to be able to treat correctly within Radiance in the near future, so a hack is the best I can do. -Greg ========================================================= LARGE Large Radiance Models Date: Mon, 1 Jul 91 12:01:59 NZT From: arch2@ccu1.aukuni.ac.nz Subject: Large Radiance Models To: greg@hobbes.lbl.gov Hi, I have been trying to get Radiance to work with models containing about 10,000 spheres, stored in a text file of about 1M, and keep getting the "out of octree space" message from oconv. In your first Radiance digest you suggest ways of increasing the size of models oconv can handle. None of these seem to do the trick. I have tried these values:
	MAXOBJBLK in object.h is 65535
	MAXOBLK in octree.h is 524287
	OSTSIZ in objset.c is 1047821 (It is prime)
I also changed OBJECT to a long. oconv dies with this message:
	oconv: system - out of octree space: Not enough space
According to ps -l oconv uses up to about 2,500K of memory. (The machine has 64M and I am allowed a maximum of 10M). The exit code is 2, if it helps. Also, what does the '-n' parameter do? On my "smaller" models '-n 7' or similar stopped the error message. Thanks in advance, Russell (for Paul Bourke) Date: Mon, 8 Jul 91 11:22:47 +0200 From: greg (Greg Ward) To: arch2@ccu1.aukuni.ac.nz Subject: Re: Large Radiance Models Hi Russell (and Paul?), I think your problems with oconv must be due to memory limitations, since it is a malloc failure that is stopping the process. If you can't find any other system variables or administrative things that are limiting your process size (Cray's operating system places limits on interactive sessions, for example), then you might try changing COMPAT=malloc.o to COMPAT=bmalloc.o in ray/src/{rt,ot}/Makefile and recompiling. This will use the system version of malloc rather than my home-grown routines. Sometimes people do some pretty strange things with memory allocation. The -n parameter to oconv makes the octree place more surfaces into the octree voxels. It is a good idea to increase this number for very complex scenes if memory is a problem. You can jack it up to around 16 with little degradation in rendering time -- at least for spheres. -Greg Date: Tue, 16 Jul 91 17:46:33 NZT From: arch2@ccu1.aukuni.ac.nz Subject: Large Radiance Models To: greg@lesosun1.epfl.ch I am still having "fun" with my large models. Would splitting the .rad file into smaller pieces and using oconv to merge them together help? Some of the programs I ran today wanted 64M of memory -- they were run in batch mode where processes are allowed that much memory.
Thanks, Russell Date: Tue, 16 Jul 91 17:47:45 NZT From: arch2@ccu1.aukuni.ac.nz To: greg@lesosun1.epfl.ch These are the statistics printed out (from the routine PrintMemStats):
	oconv-l: system - out of octree space: Not enough space
	Memory statistics:
	bmalloc: 66437172 bytes allocated
	bmalloc: 3936 bytes freed
	bmalloc: 28696 bytes scrounged
	malloc: 11931552 bytes allocated
	malloc: 5796830 bytes wasted (48.6%)
	malloc: 6006992 bytes freed
	238 * 512
	395 * 256
	720 * 128
	257 * 64
	12 * 32
	53 * 16
	2 * 8
	346248 total bytes in free list
Date: Tue, 16 Jul 91 10:05:02 +0200 From: greg (Greg Ward) To: arch2@ccu1.aukuni.ac.nz Subject: Re: Large Radiance Models Hi Russell, Thank you for the details on the oconv errors. Oconv should be able to handle 10,000 non-intersecting spheres quite easily. Your spheres must be intersecting an awful lot, or you are using more spheres than you claim, to be running over 64 Mbytes of memory! Things you can do about it:
1) Change your scene generator so as to avoid generating so many intersecting spheres.
2) Increase the -n parameter of oconv to 120.
3) Decrease the -r parameter of oconv by half or so. (This will cause a set overflow error if you decrease it too much.)
4) Try generating the octree in stages (as you suggested) by giving oconv progressively more spheres to add to the scene. You may have to use the -b option on the initial run of oconv to tell it what you expect the eventual scene boundaries to be.
5) Use the Radiance instance type to duplicate sections of your scene rather than enumerating everything. This is the best method to achieve geometric complexity without using up all available memory. You simply create a fraction of the scene you want, then instantiate it throughout the environment. Using hierarchical instancing, it is easy to create models with many millions of surfaces.
Let me know if I can be of any more help. -Greg ============================================================= PORT Portability Issues Date: Tue, 16 Jul 91 22:27:29 CDT From: stephens@minnie.wes.army.mil (Mike Stephens) To: GJWard@Csa2.lbl.gov Subject: radiance greg, this is mike stephens at waterways experiment station (wes) in vicksburg, miss. i just got the 5th release of your program radiance 1.4 and plan to use it for several projects we have going on at the scientific visualization center (svc). i have tried to compile it on our sgi (4D) boxes and have run into some problems. i was wondering what sgi machines you have successfully installed radiance on? i get errors in the 'malloc.c' routine (v_pageshift not defined) also things in tty.c get 'twisted' somehow. we are running irix ver 3.3.2 on our sgi's. any help on this would be greatly appreciated. thanks, mike (stephens@slowhand.wes.army.mil) Date: Wed, 17 Jul 91 08:57:27 +0200 From: greg (Greg Ward) To: stephens@minnie.wes.army.mil Subject: Re: radiance Hi Mike, You need to change COMPAT=malloc.o to COMPAT=bmalloc.o in the Makefiles in the src/rt and src/ot directories. The memory stuff has not been very well standardized under System V, so some of the definitions in my malloc.c cause trouble for some implementations. The routines in tty.c are really written for BSD derivatives, and don't work for any System V Unix, but this module is only used by the AED 512 driver, which you probably don't need. I guess I made an error in my makeall script and included this driver when I shouldn't have. Anyway, it just won't be made properly -- everything else should work fine.
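[For reference, the Makefile edit described above is a sketch of a one-word change to the COMPAT definition, made in both ray/src/rt/Makefile and ray/src/ot/Makefile: the line
	COMPAT=malloc.o
becomes
	COMPAT=bmalloc.o
Rerun makeall afterwards so oconv and the renderers are rebuilt; the exact position of the line varies from release to release.]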
If you want, you can change the line in makeall under the Silicon Graphics IRIS choice from special="aed" to special= I have never tried to compile 1.4 on an IRIS, just 1.3. I no longer have easy access to an IRIS workstation. (Actually, I have never had easy access to anything except a Sun 3/60.) Hope this helps! -Greg From: stephens@slowhand.wes.army.mil To: GJWard@Csa2.lbl.gov Subject: sgi's and radiance greg, many thanks for your quick response! the malloc problem could have been solved by yours truly if i had bothered to CAREFULLY read the comments in malloc.c. oh, well.... instead of the bmalloc=>malloc solution for the sgi anyway i changed malloc.c so that getpagesize was called as a system routine (which it is on the sgi irix 3.3.2) and also added an include because as it wass it couldn't find the type 'daddr_t' which is in in irix 3.3.2. did this last night after i wrote to you and viola the main critters got made!! aed still didn't but at least i had the majority of the goodies to play with. went in first thing today (7/17) and drew a pretty daffodil!!! your code is pretty slick...thanks for your efforts... atta boy... and all that stuff. take care mike (stephens@slowhand.wes.army.mil) ----------- Date: Wed, 17 Jul 1991 13:52 +0200 From: "Sigge Ruschkowski email:f87-sir@nada.kth.se or kjr@ekab1.ericsson.se" Subject: Radiance for the Mac To: greg@hobbes.lbl.gov Hi Greg, I read in RTNEWS that there is a version of Radiance for the Mac. What kind of Macs does Radiance run on? Sigge Sweden Date: Wed, 17 Jul 91 17:28:48 +0200 From: greg (Greg Ward) To: KJR@kkeka1.ericsson.se Subject: Re: Radiance for the Mac Hi Sigge, Radiance runs under A/UX (Apple's UNIX) on the Mac II family. Since most folks use the ordinary Mac OS, this doesn't do much good. But, if you're interested in A/UX for the MacIntosh, it's not all that expensive and Radiance will run on it. I've been using a Mac IIfx myself successfully to do animations using Radiance. A/UX costs around $500 in the US and X11 software is another $200 or so. The main drawback is that it requires 80+ Mbytes of disk space and X11 doesn't run well unless you have at least 8 Mbytes of RAM. Also, installation is difficult unless you buy it already installed on an Apple external drive (very expensive). CD-ROM is the next best installation method. You don't want the floppy disk product! -Greg Date: Thu, 18 Jul 1991 08:02 +0200 From: "Sigge Ruschkowski email:f87-sir@nada.kth.se or kjr@ekab1.ericsson.se" Subject: Re: Radiance for the Mac To: greg@lesosun1.epfl.ch Hi Greg, thank you for the answer! As I am using my MacII/8/170 just for private things and am just a poor student, I can't afford to by AUX and a 80MB hard drive. We will soon have AUX on some of the Macs at school and I will get your ray-tracer and try it out. Have a nice life, Sigge -------------- To: greg@hobbes.lbl.gov Subject: Radiance1R4.tar.Z Date: Thu, 18 Jul 91 11:42:39 EDT From: Scott Hankin Howdy - I've been trying to work with your latest release, and I appear to be missing some files. When I try to build the cubspace model, make tells me it doesn't know how to make "proof" which the model depends upon. When I try to run rview on anything, it fails because it can't open rayinit.cal. I can't find rayinit.cal in the tree, I deleted the distribution after expanding it, and I am reluctant to ftp it again to see if it was indeed in the distribution, but deleted by one of the many makeall clean's I did while getting things going. Can you help me out with these files? 
I'd really appreciate it. Thanks. - Scott Scott Hankin (hankin@osf.org) Open Software Foundation Date: Fri, 19 Jul 91 08:54:05 +0200 From: greg (Greg Ward) To: hankin@osf.org Subject: Re: Radiance1R4.tar.Z Dear Scott, Sure enough, the critical file "proof" was missing from the distribution. An over-zealous cleanup job on my part, I'm afraid. I've added it back in again -- thanks for bringing it to my attention. To save you from ftp'ing it again (though it's small), I'll send you the file in the next message. The rayinit.cal file (and other essential library files) come in the ray/lib directory of the distribution. They are not deleted by any cleanup procedure I wrote, but you may not have remembered to set the RAYPATH environment variable to tell the programs where to find this directory. The makeall script is supposed to do this automatically, but it only works if you tell it to go ahead and install the library in the location you select. You can always set the RAYPATH variable manually with a line in your .login file like so: setenv RAYPATH .:/installpath/ray/lib Where "installpath" is replaced with the place you installed the distribution. Hope this helps! -Greg To: "(Greg Ward)" Subject: Re: Radiance1R4.tar.Z Date: Fri, 19 Jul 91 09:47:43 EDT From: Scott Hankin It does indeed. Thanks for the info - things are up and running great even as we speak. It seems that in a moment of insanity (and a temporary shortage of disk space) I removed the ray/lib subtree - for some reason I had decided it was generated during the build process. I was obviously wrong. Thanks for the help, the software, the work it involved - thanks for everything. I never cease to be amazed at the effort folks like you will put into things they make available to the public. You are one of the heroes of learning. I know I will get a great deal out of using and examining Radiance. Keep up the terrific work! - Scott Scott Hankin (hankin@osf.org) Open Software Foundation ------------------- The following message is not specifically about Radiance, but it does get around a bug in the 1.4 release of x11image so take note. By the way, both x11image (now called just plain old "ximage") and xshowtrace have been fixed for the next release. Date: Sat, 20 Jul 91 12:23:12 PDT From: raja@robotics.berkeley.edu (Raja R. Kadiyala) To: robotics-users@robotics.berkeley.edu Subject: xdvi and openwindows Many have noticed that some programs such as xdvi and xfig do not work properly under openwindows -- they fail to accept input in the window. The fix is to tell the window manager to explicitely get input from the window this is done by putting the following lines in your .Xresourses/.Xdefaults/.Xdef (or wherever your applications resources are kept) xfig.Input: true xdvi.Input: true raja ---------------------- Date: Wed, 7 Aug 91 20:09:43 EDT From: chen@eleceng.ee.queensu.ca (Junan Chen) To: greg@hobbes.lbl.gov Subject: Radiance Status: RO Hi, Greg: Thanking you for your mail of July 31. I tried to grab Radiance at midnight, and successfully got everything I need. After I installed the Radiance, I found *rview* didn't get compiled. I also checked the *devtable.c*, and the default_driver is x11_init(), though I replied "no x10 support" during the installation. I modified default_driver of devtable.c to *sun*, but *make rview* still doesn't work properly. The other thing is how to specify the focal length with the command *rpict*. I checked the reference manual and relevant manual pages, but couldn't find any direct way to do that. 
Could you please give me some hints on my questions. BTW, we use a sparc-2 station with a 24-bit graphics adaptor which is compatible with CG8 and can be set up as 8-bit CG4 as well. Most of the time we use it as an 8-bit CG4 workstation. I really appreciate your help. Junan Chen Date: Thu, 8 Aug 91 09:46:06 +0200 From: greg (Greg Ward) To: chen@eleceng.ee.queensu.ca Subject: Re: Radiance Hi Junan, I have had this asked of me before, so I decided to make a little readme file explaining what to do if you don't have X11 support. I've attached it to the end of this letter. (When you answered "no" to X10 support, the script still assumed you had X11 support -- which is very different!) There is no adjustment for focal length, since the renderers do not have depth of field in their simple pinhole camera model. If you want this, you will have to add it yourself. [But see PINTERP topic below -G] -Greg -------------- This Radiance distribution assumes that you have X11 support (ie. a /usr/include/X11 directory and /usr/lib/libX11.a library). If this is not the case, you will have to make a couple of changes to the files in the src/rt directory to make "rview" compile properly. If you are a thorough person, you can also make changes to the Makefiles in the src/util and src/px directories to avoid some other spurious but unimportant errors. The following diffs should be applied to Makefile and devtable.c in the src/rt subdirectory:
============= rt/Makefile =============
35c35
< DOBJS = devtable.o devcomm.o editline.o x11.o x11twind.o \
---
> DOBJS = devtable.o devcomm.o \
37c37
< DSRC = devtable.c devcomm.c editline.c x11.c x11twind.c \
---
> DSRC = devtable.c devcomm.c \
39c39
< DLIBS = -lX11
---
> DLIBS =
============= rt/devtable.c =============
15c15
< char dev_default[] = "x11";
---
> char dev_default[] = "sun";
17,18d16
< extern struct driver *x11_init();
<
23c21
< {"x11", "X11 color or greyscale display", x11_init},
---
> {"x11", "X11 color or greyscale display", comm_init},
These changes may be applied to the Makefiles in the src/util and src/px subdirectories for cleaner compilation:
=============== px/Makefile =============
19c19
< ra_t8 ra_bn ra_t16 pcomb pinterp ximage xshowtrace pflip
---
> ra_t8 ra_bn ra_t16 pcomb pinterp xshowtrace pflip
=============== util/Makefile ============
13c13
< PROGS = makedist swaprasheader findglare xglaresrc glarendx
---
> PROGS = makedist swaprasheader findglare glarendx
========================================================= PINTERP Uses for the Pinterp Program From: Frank Bennett Subject: Radiance To: greg@hobbes.lbl.gov Date: Tue, 20 Aug 91 9:46:15 MDT Greg: I just picked up Radiance. Looks interesting. I found the following missing:
	obj/model.new/rayinit.cal
	obj/cabin/tree.rad - landscape wants to instantiate tree.oct
I haven't done a lot yet; I did go back & pick up some .pic files & pub/objects/gjward.tar.Z. I have not been able to ascertain (from Raytracing News or your package) whether you generate "lit" polygons with soft shadows, which you can then walk through. The advantage of a Radiosity system is you only need to recompute the scene if the lights or objects are moved, but not for camera moves. good work, Frank Bennett - Hewlett Packard P.S.
a plug: Our new Snake CPUs love to raytrace. The aquarium from ftp.ee.lbl.gov:RAY/aq.tar.Z:
	Sparc2		52 hours
	Cray YMP	18 hours
	HP9000/720	17.5 hours
	HP9000/750	13 hours
Date: Wed, 21 Aug 91 08:26:06 +0200 From: greg (Greg Ward) To: fwb@hpfcfwb.fc.hp.com Subject: Re: Radiance Hello Frank, It sounds like you need to set the environment variable RAYPATH to the location of the library directory because it's not in the default location /usr/local/lib/ray. The README file should explain it, but basically you need a line in your .login like:
	setenv RAYPATH .:/my/radiance/path/lib
Then the problems you mentioned should go away. As for soft shadows, the default options of rpict produce sharp shadows, but you can set the following if you want soft shadows:
	-sp 1 -dj .5
The -sp 1 value turns off image plane sampling, so the renderings will take substantially longer. Also, if you don't do any anti-aliasing by running the result through pfilt and reducing the resolution, the shadows will appear noisy. I am aware that soft shadows and walk-through animations are big advantages of radiosity methods, but then you're stuck with simple polygonal scenes and diffuse surfaces, so it's a tradeoff. I have implemented a z-buffer interpolation program, pinterp, for generating walk-through animations that makes ray tracing a reasonable way to go, actually. It would be nice to store all that shadow information somehow in the scene, but the memory requirements are daunting. We're running these programs on small machines, too, you know. Thanks for your input. -Greg Date: Wed, 14 Aug 91 17:24:34 -0400 From: hr3@prism.gatech.edu (RUSHMEIER,HOLLY E) To: greg@lesosun1.epfl.ch For our computer vision project, we are using Radiance to model a finite size pinhole by making lots of images from different points in the pinhole and then adding them up, accounting for the shifts in pixel location. Apart from writing new code, is this the only way to do this? Holly Date: Thu, 15 Aug 91 09:41:30 +0200 From: greg (Greg Ward) To: hr3@prism.gatech.edu Subject: looking at the world through a pinhole Hi Holly, Regrettably, I can't think of any more direct way to do pinhole sampling than what you're doing already. However, there is a way you can do it a little faster, I think. Generate an image from the center using rpict with the -z option to generate a z-file. Then, use pinterp with the following options to produce images at different points like so:
	% rpict [opts] [view] -x xres -y yres -z scene.z scene.oct > scene.pic
	% pinterp -vf scene.pic -vp x1 y1 z1 -vs h1 -vl v1 -x xres -y yres \
		-ff -r "[opts] scene.oct" scene.pic scene.z > scene1.pic
Pinterp just moves the pixels around according to the new viewpoint using a z-buffer approach and calling on rtrace (in this case) to compute pixels it cannot find. The only problem with this technique is that it does not correctly follow specular reflections in the new image, so if you are trying to see depth of field in a reflection you need to wait for rpict. Also, you can use the view shift and lift parameters to do the pixel shifting for you. You just have to compute the appropriate values based on the distance to your pinhole's image plane. I would work out the formulas for you, but I'm sure I'd make some dumb error. Finally, I would recommend using pcomb to add the images up if you aren't using it already. You can give a scalefactor of 1/N for each image and when you pass it through pfilt later you should get the correct radiance values.
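[A minimal sketch of the pcomb averaging suggested above, assuming four pinhole-sample views named view1.pic through view4.pic (pcomb's -s option applies a scale factor to the picture that follows it):
	% pcomb -s .25 view1.pic -s .25 view2.pic \
		-s .25 view3.pic -s .25 view4.pic > pinhole.pic
With N sample views, use a scale factor of 1/N for each, then pass the result through pfilt as described above.]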
Let me know if I can be of any help, and thanks very much for sending the report and the announcement. -Greg ~s Radiance Digest v1n5 Dear Radiance Users, For those of you still with us after last week's fiasco, here is a culling of electronic mail exchanged between me and some of you over the last month. Once again, I have given headings by subject to make it easier to browse without reading everything. A couple of reminders. First, you may pick up previous issues of the Radiance Digest at hobbes.lbl.gov (128.3.12.38) with anonymous ftp from the pub/digest directory. Second, you must write to me if you want your name removed from this mailing list -- please take a few minutes now to express your outrage at getting such unwelcome junk mail so that the tension doesn't build up and make the blood vessels on your forehead bulge out in an unbecoming fashion. I particularly recommend looking at the section on the Radiance picture format to anyone interested in translating Radiance images. -Greg
MAC - MacIntosh and Radiance
PICTURE - Radiance Picture file format
HPUX - Hewlett-Packard UNIX and Radiance 1.4
DAYLIGHT - Radiance and Daylight Simulation
AMIGA - Radiance on the Amiga 2000
COLORPICT - Using the Colorpict primitive
DOS - Radiance under DOS?
RVIEW - Rview and memory
LIGHTS - Non-standard Light Sources
============================ MAC MacIntosh programs with Radiance and A/UX Date: Fri, 23 Aug 91 13:07:34 NZT From: pdbourke%ccu1.aukuni.ac.nz@Csa2.lbl.gov Subject: Fractal Landscape generator To: GJWard@Csa2.lbl.gov I've just written the "old fractal landscape" generator using the subdivision technique (as opposed to spectral techniques). A Mac II application! It does the following:
- surface parameters, x,y range and typical z variation
- roughness parameter
- sea (lake) level
- seed specification, lets the same landscape be generated on request
- 3 point colour ramp mapped to height
- DXF, Super3D, RayShade, and Radiance
- view on the Mac screen of wireframe coloured model
- variable resolution up to 256x256 cells
- automatic triangulation of non-planar facets
------------------------------ Paul D. Bourke pdbourke@ccu1.aukuni.ac.nz Date: Sun, 25 Aug 91 15:54:59 EDT From: David Brainard To: greg@lesosun1.epfl.ch Subject: Re: Mac version I see. I will definitely have a Big Mac (as it were) and am trying to decide whether to load it with A/UX or not. You are the first person I've had any contact with who seems to be using it. Do you find it a reasonable Unix? Do you find that native Mac applications run on top of it without too many problems? My prior was that running Unix on the MAC would provide the worst of both worlds, rather than the best. But I am willing to be convinced otherwise. Thanks for any advice you can give. David Date: Mon, 26 Aug 91 09:40:37 +0200 From: greg (Greg Ward) To: brainard@cvs.rochester.edu Subject: Re: Mac version Hi David, My advice is free, and it's worth what you pay for it! I won't claim that A/UX is the best of both worlds. I have certainly had my share of frustrations trying to get around little annoying compiler bugs and poor virtual memory performance on the UNIX side, but I've dealt with worse UNIX implementations, certainly. The worst thing I can say about A/UX is that it's a System V derivative, with all the associated problems. Berkeley, Berkeley, yeah, yeah, yeah! As for the Mac side, I haven't tried it out with every software package.
I have tried Microsoft Word 4.0, which seems fairly stable, MacDraw II, Adobe Photoshop (excellent software in my opinion), Aldus Freehand, Studio/8, Versaterm, Mathematica, HyperCard, and a few others I can't think of right now. I haven't tried Excel or any database systems, but WingZ seems to work (though I haven't used it really at all). Programs that I've tried which failed are MacWrite, SuperPaint and Architrion II. Sometimes you'll get a warning if the 32-bit clean bit isn't set on the application and it will run anyway, but only if the code actually is 32-bit clean and they just forgot to set the bit, as is the case with HyperCard 2.0. In general, 24-bit applications don't work, even in the special 24-bit mode A/UX Finder. I don't see this as being much of a drawback, since all applications have to be 32-bit clean to run under System 7, anyway. The other loss (that I don't really consider a loss) is all the OS utilities and INIT's and so forth that work under the MacOS. You will be very sorry if you install any of these under A/UX. Also, specialty programs for formatting hard drives and other hardware-specific tasks must be run under the regular OS. Keep in mind that installing A/UX doesn't mean giving up your regular MacIntosh. All you really give up is about 100 Mbytes of disk space somewhere. You can still run without A/UX, only booting it when you have some UNIX program you need to run. Also, since A/UX is cognizant of the Mac volumes, it's not necessary to install your Mac applications or data in two places. In conclusion, I think the A/UX development team have done a decent job getting these two very different systems to cooperate with each other, and I've been generally satisfied with the improvements that come with each new release -- something I can't say for the Sun operating system! -Greg [A sad postscript to this message -- it seems that the new Adobe Photoshop version 2.0 doesn't like A/UX.] ================================ PICTURE Radiance Picture Format Date: Fri, 13 Sep 91 16:56:02 NZT From: russells@ccu1.aukuni.ac.nz Subject: Radiance picture format To: greg@lesosun1.epfl.ch Hi, Can you supply the format of the Radiance picture files, as produced by rpict etal. I am planning to write an Macintosh application to display them, using MacTCP to bring them from the Unix system without intermediate FTPing and conversions. Also, is it possible for Radiance to do parametric models. For instance you set a variable and use instances of that variable throughout your model files. This would be good for my Super 3D (a Mac modeller) to Radiance translator for handling "one-pixel wide" lines. Thanks in advance, Russell Street russells@ccu1.aukuni.ac.nz (arch2 in a former life) Date: Fri, 13 Sep 91 09:19:31 +0200 From: greg (Greg Ward) To: russells@ccu1.aukuni.ac.nz Subject: Re: Radiance picture format Hi Russell, Thanks for writing the Super 3D translator. Paul just wrote to me about it, but I haven't had a chance yet to try it out. I need to find a copy of Super 3D first! At the end of this mail I will put a shar file of the routines you need to read and write Radiance pictures. The format has been enhanced slightly for the next release (in an upward compatible way), so you should definitely use these newer routines. The file format, like most binary files used by Radiance, contains an ascii information header that is terminated by an empty line. 
This header typically contains the commands used to generate the file along with variables indicating exposure, view parameters, and so on. Next there is a single line that indicates the resolution and pixel scanning order of the image. For Radiance pictures, the pixels are ordered as English text, left to right and top to bottom. This is indicated with a line of the form:
	-Y M +X N
where M and N are the y and x resolutions, respectively. The x and y image coordinates are always the same, starting with (0,0) at the lower left corner, (N,0) at the lower right, and (0,M) at the upper left. The y resolution appears first in our specification because it is the major sort, and is preceded by a minus sign because it is decreasing in the file. Finally, the floating point scanlines follow. Each pixel is represented by at most 4 bytes. The first three bytes are the red, green and blue mantissas (in that order), and the fourth byte is a common exponent. The floating point color (R,G,B)=(1.,.5,.25) would be represented by the bytes (128,64,32,129). The conversion back to floating point is possible using the ldexp() library routine, or it's better to use the colr_color() routine included in color.c. The scanlines are usually run-length encoded. My previous scheme (release 1.4 and earlier) used a simple count for repeated pixels. My new scheme is more complicated and encodes the four components separately. I don't recommend writing your own routine to decode it -- use what's in color.c. A skeletal program to read a Radiance picture file and convert to 24-bit gamma-corrected color looks like this:

#include <stdio.h>
#include "color.h"

main(argc, argv)
int argc;
char *argv[];
{
	char *fname;		/* Radiance input file name */
	FILE *fp;		/* input stream pointer */
	int xres, yres;		/* x and y image resolution */
	int y;
	register COLR *scanin;
	register int x;

	if (argc < 2) {
		fp = stdin;
		fname = "";
	} else if ((fp = fopen(fname=argv[1], "r")) == NULL) {
		perror(fname);
		exit(1);
	}
	if (checkheader(fp, COLRFMT, NULL) < 0 ||
			fgetresolu(&xres, &yres, fp) != (YMAJOR|YDECR)) {
		fprintf(stderr, "%s: not a Radiance picture\n", fname);
		exit(1);
	}
	if ((scanin = (COLR *)malloc(xres*sizeof(COLR))) == NULL) {
		perror(argv[0]);
		exit(1);
	}
	setcolrgam(2.2);	/* set appropriate gamma correction */
	for (y = yres-1; y >= 0; y--) {
		if (freadcolrs(scanin, xres, fp) < 0) {
			fprintf(stderr, "%s: read error\n", fname);
			exit(1);
		}
		colrs_gambs(scanin, xres);	/* convert to gamma-corrected bytes */
		for (x = 0; x < xres; x++) {
			/*
			 * Do something with the bytes:
			 *	scanin[x][RED]
			 *	scanin[x][GRN]
			 *	scanin[x][BLU]
			 */
		}
	}
	free((char *)scanin);
	exit(0);
}

You will find all the routines you need in ray/src/common. The checkheader() routine is in the module header.c, fgetresolu() is in resolu.c, freadcolrs() is in color.c, and setcolrgam() and colrs_gambs() are in the module colrops.c. If you want to convert the file to 8-bit color, the process is quite a bit more complicated. I suggest you take a look at the ra_pr program in the ray/src/px directory to get a better idea of what is involved. For communicating with another program across MacTCP, I suggest reading and unpacking the scanlines on the remote (UNIX) host and transferring fixed-length packets (perhaps a scanline at a time) to your MacIntosh display program. As for a macro facility to replace variables with values in a Radiance scene file, Paul asked me also about this earlier in the context of pixel-thick lines.
You can certainly use a macro program such as cpp or m4 to replace variables with values in a file, but the real problem is that pixel-thick lines do not exist. They should be converted to cylinders with a non-zero radius corresponding to the physical objects the lines were meant to represent. This may end up being a pixel wide in the final image, but usually it is not. Radiance actually has no notion of pixel size in its rendering routines, so variable replacement with the pixel size is impossible anyway. -Greg ------------------------------ #! /bin/sh # This is a shell archive, meaning: # 1. Remove everything above the #! /bin/sh line. # 2. Save the resulting text in a file. # 3. Execute the file with /bin/sh (not csh) to create: # color.h # header.c # resolu.c # color.c # colrops.c # This archive created: Fri Sep 13 09:19:19 1991 [Deleted for the sake of brevity.] ============================================ HPUX Hewlett-Packard UNIX and Radiance 1.4 From: jaf@beauty.graphics.cornell.edu (James A. Ferwerda) Subject: Radiance installation on HP9000/835 To: GJWard@Csa2.lbl.gov Date: Thu, 29 Aug 91 14:53:05 EDT Hi, I'm a potentially new user of Radiance, but I'm having a little trouble getting it to compile on my HP9000/835. Included below is a copy of the output of the makeall script. I'm writing to you for assistance before I make any major changes to the distributed version because I'd like to remain compatible with any updates or bug fixes, and because I figure you know your software better than I ever will and might know a quick fix which would save me hours of hacking. I compiled the package using the "HP workstation" option in the makeall script, and so far the only code change I've made was to change the line #include in malloc.c to #include because there is no var.h in usr/include/sys on the HP system. I'm not a hardware or a systems person (or even a very good programmer) but to the best of my knowledge the HP9000/835 is a RISC based machine running a modified version of System V. (But the makeall script gave similar messages when I used the "Other" option and indicated a RISC based non-BSD machine). Any insights you can give me on how to get the package up and running on my HP machine would be greatly appreciated. Radiance looks like a great tool for my work on surface lightness/ illumination perception and I'd really like to be able to use it. By the way, I first heard about Radiance from Holly Rushmeier from Georgia Tech who gave it rave reviews. Thanks in advance for any help, and keep up the good work. -Jim Ferwerda jaf@graphics.cornell.edu Date: Fri, 30 Aug 91 10:38:00 +0200 From: greg (Greg Ward) To: jaf@beauty.graphics.cornell.edu Subject: Re: Radiance installation on HP9000/835 Hello Jim, OK, so I admit that I haven't actually compiled the distribution on all these different machines. I'm still running on a Sun 3/60 myself. Thanks for sending me the compilation errors. It really is the best way for me to correct portability problems in the code. I recommend the following changes: 1) Change the #ifdef lines to #ifndef BSD at: line 78 of src/cv/dxfcvt/config.h and line 22 of src/cal/calc/calc.c 2) Change the COMPAT=malloc.o to COMPAT=bmalloc.o in: src/ot/Makefile and src/rt/Makefile These fixes will appear in the next release, thanks to you. 
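[To make the first fix concrete (assuming the lines in question currently read "#ifdef BSD"), the change at line 78 of src/cv/dxfcvt/config.h and line 22 of src/cal/calc/calc.c is simply to flip the preprocessor test:
	#ifdef BSD
becomes
	#ifndef BSD
The second fix is the same COMPAT=malloc.o to COMPAT=bmalloc.o Makefile change described in the PORT section above.]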
-Greg From greg Thu Sep 5 11:03:53 1991 Date: Thu, 5 Sep 91 11:03:52 +0200 From: greg (Greg Ward) To: jaf@beauty.graphics.cornell.edu Subject: Re: Radiance installation on HP9000/835 Status: R Hi Jim, I took a look at the core and oconv is hitting a bus error in the scanf() procedure. I suspect the problem is memory alignment, and that the HP is a RISC machine and we need to define ALIGN=double for bmalloc to be compiled properly. I recommend doing this manually, and I will correct the entry for HP workstations in the makeall script for the next distribution. Do:
	% cd ray/src/ot
	% rm bmalloc.o
	% cc -O -DALIGN=double -c bmalloc.c
	% cd ../rt
	% rm bmalloc.o
	% cc -O -DALIGN=double -c bmalloc.c
Then rerun makeall from the top and the fixed compilations of bmalloc.o will be included. Hopefully, this will fix your problems. -Greg P.S. I didn't do it myself because of file permissions, obviously. From daemon Fri Sep 6 00:24:34 1991 From: James A. Ferwerda Subject: Success! To: greg@lesosun1.epfl.ch Date: Thu, 5 Sep 91 18:26:33 EDT Mailer: Elm [revision: 64.9] Status: R Greg, Thanks for the fixes. I'm off and running, working my way through the tutorial. Thus far I'm really impressed with how comprehensive your package looks and how easy it's been to get things to come up. You've really done a great job. I'll keep you posted on how things are coming along. Thanks. -Jim From: nfotis%theseas.ntua.gr@Csa2.lbl.gov (Nikolaos) Subject: Troubles with Radiance1R4 To: gjward@Csa2.lbl.gov (Greg Ward) Date: Sat, 7 Sep 91 8:33:14 EET DST Dear Mr. Ward, I have just set out to play with Radiance, but with bad results (on an HP-9000/720. Here's its `uname -a` output:
	HP-UX kentayro A.B8.01 A 9000/720 29904182
) The first trouble: malloc.c doesn't compile with the 5th option of makeall:
	cc: "malloc.c", line 270: error 1588: "v_pageshift" undefined.
	cc: "malloc.c", line 270: error 1531: Invalid member of struct or union.
	*** Error code 1
I did the following changes:
--
#ifndef NOVMEM
#ifndef BSD
/* For HP-UX 8.01, I changed the line:
	#include
to:
	#include
ARRGH!! This damned machine has no .v_pageshift structure!?! I have checked the /usr/include directory, and much to my horror, I have found various page sizes:
/usr/include > fgrep "page size" *
a.out.h:#define EXEC_PAGESIZE 4096   not always the same as the MMU page size
/usr/include/sys > fgrep "page size" *
sys/framebuf.h: locked page size = 2 ** 11
sys/unistd.h:# define _SC_PAGE_SIZE 3001   PAGE_SIZE: Software page size
/usr/include/machine > fgrep "page size" machine
machine/param.h:#define NBPG_PA83 2048   2kb page size for PA-RISC 1.0
*/
/* So I try with the following test: */
int getpagesize()
{
	/* I think that machine/param.h has the right number */
#include <machine/param.h>
	return(NBPG_PA83);	/* This is supposed to be ok for PA-RISC 1.0,
				   but I don't know about 1.1 (i.e. Snakes) */
}
/* I played with various combinations, but the examples in the "Radiance tutorial" keep crashing (the examples with rview give bus error/core dumps). Fighting with HP's debugger showed that the program was inside readobj.c, immediately after line 165:
	165	if (fscanf(fp, "%lf", &fa->farg[i]) != 1)
#ifdef REALSYSV
int getpagesize()	/* use SYSV var structure to get page size */
etc... That's all I can do for the moment (I should go to the bed, you see... ) Have a nice weekend, Nick. -- Nikolaos Fotis National Technical Univ.
of Athens, Greece 16 Esperidon St., UUCP: mcsun!ariadne!theseas!nfotis Halandri, GR - 152 32 or InterNet : nfotis@theseas.ntua.gr Athens, GREECE FAX: (+30 1) 77 84 578 Date: Thu, 12 Sep 91 09:11:54 +0200 From: greg (Greg Ward) To: nfotis%theseas.ntua.gr@Csa2.lbl.gov Subject: Re: Troubles with Radiance1R4 Yes, someone else has had trouble with HP workstations and my version of malloc. The truth is, you don't need my version of malloc and it would probably make life simpler if you replaced the appearance of malloc.o with bmalloc.o in the Makefiles of the ot and rt directories. Also, you should compile bmalloc.c with the option -DALIGN=double for RISC architectures, something I should have included for HP workstations in the makeall script but didn't out of ignorance. I recommend compiling bmalloc.c by hand in the ot and rt subdirectories, replacing malloc.o with bmalloc.o in the associated Makefile's, and rerunning makeall afterwards. I hope that this fixes your problems. -Greg ======================================== DAYLIGHT Radiance and Daylight Simulation Date: Wed, 4 Sep 91 11:57:27 Z From: Environmental Design Unit To: greg@lesosun1.epfl.ch Subject: Radiance Dear Greg, I have some more questions about Radiance and daylight factor calculation. a) What is the latest version of Radiance currently avialable, and how does is it differ from v1.3.1 with respect to df calc? b) Any joy yet in dealing with adjacent spaces? c) Is there a way of modelling external structures so that both the direct AND diffuse daylight entering a space is modified? (A "fudge factor" in the sky model?). d) Specular reflections from (pseudo?) glazing elements - are they a function of incidence angle? (Reflections at grazing incidence dominate for deep-well atria with glass 'walls'). Thanks in advance. -John Mardaljevic ps. I realise I might be asking some difficult questions. pps. It might not be a bad thing to warn any Radiance users moving in the direction of df calcs about the dangers of using insufficiently small source polygons for window elements (df>100% !!). Date: Wed, 4 Sep 91 15:53:25 +0200 From: greg (Greg Ward) To: edu@leicester-poly.ac.uk Subject: Re: Radiance Hi John, a) I think it's really the next release of Radiance that you want. Version 1.4 has relatively few advantages in the daylight area over 1.3.1. The release on which I am currently laboring (probably 2.0), has many more of the calculation capabilities that are needed for difficult daylight modeling and analysis. In particular, there is a daylight factor calculation and visualization script that produces contour plots of the workplane (hallelujah) and additional capabilities built into Radiance itself for the simulation of daylight reflected from mirrored surfaces and so on. b) I have not tried it out yet, but I think Radiance is now ready to tackle adjacent spaces to atria. The key is a new program called mkillum that calculates in a separate pass the distribution of light from windows or other such "secondary" light sources. In the process, mkillum accounts for all interactions including external obstructions and interreflections. c) In older releases of Radiance, the only way to account properly for external obstructions (other than computing the distribution yourself) is to use the interreflection calculation to compute the contribution from the window, ie. do not make the windows into type illum. This is even still the best method for offices with very large windows. (Avoiding the problem you mentioned in your P.S.) 
d) Radiance surfaces obey Fresnel's laws where reflection goes to one and transmission goes to zero at grazing for specular surfaces. However, to properly account for reflected sunlight from atria walls, you must use the next release of Radiance with code for finding virtual light sources. Previous releases cannot find secondary rays from such tiny sources as the sun. If you want to be a beta test site for version 2.0, you need only ask. I would be happy to put together a current distribution for you to test. I plan to make an official release near the end of this year. -Greg ======================================= AMIGA Radiance on the Amiga 2000 Date: Thu, 29 Aug 91 13:51:24 MED From: bojsen@dc.dth.dk (Per Bojsen) To: greg@hobbes.lbl.gov Subject: Distribution of a port of the RADIANCE package Greg, I'm currently porting your RADIANCE package to the Amiga. I would like to distribute at least the binaries along with the data files necessary, but preferably the whole package including the patched sources. Is it permitted to redistribute the package with pacthed sources? If not, what parts of the package may be redistributed, if any? -------------------------------------------------------------------------------- Per Bojsen The VLSI Research Group EMail: bojsen@dc.dth.dk MoDAG Technical University of Denmark -------------------------------------------------------------------------------- Date: Thu, 29 Aug 91 14:02:57 +0200 From: greg (Greg Ward) To: bojsen@dc.dth.dk Subject: Re: Distribution of a port of the RADIANCE package Hello Per Bojsen, First off, let me thank you for porting the software. I haven't used an Amiga myself, but I like what I've seen on it and it sounds like a great value. Could you tell me a little about your experience bringing the software over? Did you have to throw much out? Do you have a display driver? Did you get rview to work? Since no one has asked to redistribute a modified version of Radiance before, I think I will have to ask around and find out if it would be acceptable. Our main concern I suppose is protecting the reputation of Lawrence Berkeley Lab (if it has one) against irresponsible changes. I'll try and get back to you by the end of next week. -Greg Date: Wed, 4 Sep 91 15:42:26 +0200 From: her%compel.dk%dkuug.dk@Csa2.lbl.gov (Helge Egelund Rasmussen) New RADIANCE 1R4 user... Mail address: Helge E. Rasmussen Compel A/S Hvidovrevej 80 DK-2610 Roedovre Denmark Phone: +45 36 72 33 00 E-mail: her@compel.dk Machine: 386, Amiga System: Interactive Unix, AmigaDos Application: Hobby, I've been working with lots of different renderes on the Amiga. I'm in the process of porting Radiance to the Amiga. At the moment, nearly everything works except programs which use the pipe system call (f.ex. pinterp). I've created a rview device driver for the Amiga HAM-E 'framebuffer', and created a picture converter to the Amiga IFF24 bit graphics format. At the moment, I'm working on a converter which can convert Imagine objects and scenes to Radiance scenes (Imagine is a commercial Amiga based render/animation package which has a rather good 'triangle' based 3d editor). I can upload the patches for the Amiga version to hobbes if you are interested. Date: Wed, 4 Sep 91 17:17:48 +0200 From: greg (Greg Ward) To: bojsen@dc.dth.dk, her@compel.dk Subject: Re: Distribution of a port of the RADIANCE package Dear Per Bojsen and Helge Rasmussen, Thank you both for your work porting Radiance to the Amiga. 
It is a shock to me that anyone has attempted this, let alone two people from the same country at the same time! I have asked those in the know at LBL about the official policies on redistribution of software, and there don't appear to be any. Therefore, I am going to make up my own policies, which I hope will be acceptable to everyone. First off, I don't think we really need TWO ports of Radiance to the Amiga, so I would like the two of you to have a little discussion and fight it out among you to decide which and what to include in a distribution. Second, I would like hobbes.lbl.gov to be used as the distribution site, at least for the time being. Before the end of the year, I hope to set up a site here in Switzerland for distribution on the European continent that is identical to the one in California. I have created a new directory in the anonymous ftp pub/ports subdirectory called "amiga". I would like it very much if you (collectively) would deposit your patches for the Amiga plus any driver programs or routines you have written. Please do not duplicate the main distribution as I would like people to continue to draw from the original for the sake of future compatibility. Please also include a README file describing the contents of your directory so that other folks who are not so gifted (including myself) can figure out what to do with it. Third, I don't really want the Amiga binaries all stored on hobbes because it would put quite a burden on my disk space and potentially on the network if everybody and his brother wants a copy. Therefore, I would be delighted if you would distribute the executables yourself to interested parties via floppy disk or whatever transfer medium you prefer. (I have been using floppies myself to distribute the MacIntosh A/UX version.) I have no objection if you want to redistribute the source code as well, so long as you make it clear what version you are distributing and that the main site has the most recent version. Thanks again guys. Bravo! Good work! (And all that.) -Greg Date: Mon, 9 Sep 91 17:36:40 MED From: bojsen@dc.dth.dk (Per Bojsen) To: greg@lesosun1.epfl.ch Cc: her@compel.dk Subject: Distribution of a port of the RADIANCE package Hello Greg, > Thank you both for your work porting Radiance to the Amiga. It is a shock > to me that anyone has attempted this, let alone two people from the same > country at the same time! > In fact it is a surprise for me too! I don't know Helge personally, but I have seen him appear on USENET. > First off, I don't think we really need TWO ports of Radiance to the > Amiga, so I would like the two of you to have a little discussion and > fight it out among you to decide which and what to include in a distribution. > I agree with that. The discussion has started. One thing we have to agree upon is whether we will support old versions of the Amiga operating systems, or only the newest. As soon as we have compared our ports and agreed upon the patches we'll let you know! Per. ======================================== COLORPICT Using the Colorpict primitive Date: Wed, 4 Sep 91 15:01:52 -0700 From: chet@cs.uoregon.edu (Chet Haase) To: GJWard@Csa2.lbl.gov Subject: Colorpict function Hi. I'm trying to get the Colorpict pattern to work right now and can't seem to manage it. I see it being used in example pictures (such as model.new), but the source for those pictures is not in our distribution (1.2?). Is there more documentation or examples elsewhere that I could get ahold of? 
The main error that's occurring is: rview: rfuncname: undefined function, where rfuncname is the rfunc I've defined in the .cal file (which it is finding). But then, I'm not really sure what I should be using for these functions without a good example to work from (all I want to do is a straight mapping of the image onto a polygonal surface in another image, so I'm not sure what parameters I should be using), so the source of the problem may be elsewhere. Thanks for your help. Incidentally, I've been using your Architrion translator that you sent me help on last Fall and it works great. Thanks. Chet Haase CIS Department University of Oregon Date: Thu, 5 Sep 91 10:28:04 +0200 From: greg (Greg Ward) To: chet@cs.uoregon.edu Subject: Re: Colorpict function Hi Chet, You've discovered the worst documented part of Radiance -- function files! One day, I hope to make all this clear to people, but as you can see it is very muddy water. First off, rfuncname is just an example to get you to pick your own name as appropriate for your material. Each of the primary color functions is a function of the three input primaries, thus allowing any mapping. A straightfoward mapping (as defined in rayinit.cal) is: red(r,g,b) = r; green(r,g,b) = g; blue(r,g,b) = b; You may want to use the clip_r, clip_g and clip_b functions instead to prevent reflectances greater than one (a no-no in any physical simulation). An example of this should be contained in the file "picture" in the directory model.new. The actual files used, picture.cal and pine.pic, should be found in the Radiance library location (ray/lib in the distribution). The additional arguments at the end of the colorpict define a transformation to get from the world coordinates to the coordinates desired for the picture, pic_u and pic_v. If you can't find the files or have other specific questions, I'll be happy to help. Otherwise, you could tell me exactly what you want and I could do it for you as the most appropriate example. -Greg Date: Thu, 5 Sep 91 13:27:40 -0700 From: chet@cs.uoregon.edu (Chet Haase) To: GJWard@Csa2.lbl.gov Subject: colorpict revisited I couldn't find any of the examples you listed except picture.cal and the picture only for model.new; I couldn't find the model.new directory or the pine file in either the radiance directory on the system or the latest distribution tape we have (1.3.1). So while I kind of understand what I'm supposed to do, I'm apparently not getting the format or usage right because I keep getting the same errors. I'll describe my situation a bit more clearly (hopefully): I've got a picture called rgb.pic of a monitor screen and I'd like to map it into a scene in which I've defined a monitor. 
Using the functions in rayinit.cal and picture.cal as models, I've defined my own functions in rgbpic.cal:

{ rgbpic.cal }
clip_r(r,g,b) = min(r,1);
clip_g(r,g,b) = min(g,1);
clip_b(r,g,b) = min(b,1);
rgb_u = 1;
rgb_v = 1;

(Note - these are perhaps silly functions for doing what I want, but at this stage I just want to get the thing to compile) I then try to map the picture with colorpict using this .cal file like so

void colorpict rgbpic
7 clip_r(0,0,0) clip_g(0,0,0) clip_b(0,0,0) rgb.pic rgbpic.cal rgb_u rgb_v
0
0
rgbpic plastic rgbimage
0
0
5 1 1 1 .05 0
rgbimage polygon rgbpicture
0
0
12	.13 .481 .13
	1.21 .481 .13
	1.21 .481 .96
	.13 .481 .96

When I attempt to run rview on the .oct file, however, I get the error:

rview: clip_r(0,0,0): undefined function

Since it does find the .cal file (I've got RAYPATH defined correctly), I don't understand why it's not using the function I've defined. SO, the main problems I'm having are:
1) the above function error and how to avoid it
2) what functions/values I should be using for this purpose - I don't really want to do anything funky with textures or colors, I simply want to map the picture directly over what's in the scene. If that model.new file would show me more about how to do this, I'd love to see it. The closest example I've found here is the source for the tennis ball picture, but it wasn't quite enough for me to get the colorpict usage down...
Thanks for your help. Chet. Date: Fri, 6 Sep 91 10:05:26 +0200 From: greg (Greg Ward) To: chet@cs.uoregon.edu Subject: Re: colorpict revisited Hi Chet, I'll offer the following changes to the file, and hopefully you can get this to work.

----------------------
rgbpic.cal:

{ rgbpic.cal }
clip_r(r,g,b) = min(r,1);
clip_g(r,g,b) = min(g,1);
clip_b(r,g,b) = min(b,1);
rgb_u = (Px-.13)/(.96-.13);	{ was rgb_u = 1; }
rgb_v = (Pz-.13)/(.96-.13);	{ was rgb_v = 1; }
----------------------
Radiance file:

#
# Took out the arguments for the following:
#
void colorpict rgbpic
7 clip_r clip_g clip_b rgb.pic rgbpic.cal rgb_u rgb_v
0
0
rgbpic plastic rgbimage
0
0
5 1 1 1 .05 0
rgbimage polygon rgbpicture
0
0
12	.13 .481 .13
	1.21 .481 .13
	1.21 .481 .96
	.13 .481 .96
-------------------------

The coordinate mapping I have made is based on the location, orientation, and size of the polygon in your file. If any of these things change, you will have to change the coordinate mapping. The simpler way to do this is to use picture.cal and add a transformation to the colorpict primitive, but since you're just trying to get it to work for now, I wanted to stay as close to your original as possible. The scale factors for pictures are determined based on the aspect ratio. I'm assuming that your picture has square pixels and will fill the polygon you have supplied, thus the horizontal (x) dimension is larger. Quoting from the Radiance reference manual (ray.1): The dimensions of the image data are determined by the picture such that the smaller dimension is always 1, and the other is the ratio between the larger and the smaller. For example, a 500x338 picture would have coordinates (u,v) in the rectangle between (0,0) and (1.48,1). Hope this works! -Greg Date: Fri, 6 Sep 91 16:31:05 -0700 From: Chet Haase To: greg@lesosun1.epfl.ch Subject: Re: colorpict revisited That was the kind of help I needed - problem solved. Thanks, Chet.
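[A worked sketch, not part of the original exchange: Greg notes above that the coordinate mapping has to be edited whenever the polygon moves. One way around that, staying with his rgb_u/rgb_v approach rather than switching to picture.cal, is to pass the polygon extents to the .cal file as real arguments. This assumes the primitive's real arguments are available inside the function file as A1, A2, and so on, and it keeps Greg's choice of dividing both coordinates by the z extent (the smaller picture dimension). Untested, so check it against the reference manual before relying on it.

{ rgbpic2.cal - hypothetical variation on Greg's rgbpic.cal }
{ A1,A2 = min,max x of the polygon; A3,A4 = min,max z }
clip_r(r,g,b) = min(r,1);
clip_g(r,g,b) = min(g,1);
clip_b(r,g,b) = min(b,1);
rgb_u = (Px-A1)/(A4-A3);
rgb_v = (Pz-A3)/(A4-A3);

void colorpict rgbpic
7 clip_r clip_g clip_b rgb.pic rgbpic2.cal rgb_u rgb_v
0
4 .13 1.21 .13 .96

Moving or resizing the monitor polygon then only requires changing the four real arguments, not the function file.]

=============================================== DOS Radiance under DOS?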
Date: Thu, 5 Sep 91 17:26:24 PDT From: Donald Yett To: greg@hobbes.lbl.gov Subject: Radiance questions Hi, I just grabbed the Radiance package and related files from the ftp site.. I do have a few questions in hand. 1). Has this ever been ported to (please don't flame me!) DOS? 2). Is there a translator to convert the output to (yea I know it's limited format) GIF? In this day and age of 40-MIPS / 160 MFLOPS PC's I think it is a valid question, even though I don't condone people wasting their money just to run DOS! (Yea I said 160 MFLOPS! Although that board would cost the end-user about $15k...) Date: Fri, 6 Sep 91 09:22:26 +0200 From: greg (Greg Ward) To: dyett@phad.hsc.usc.edu Subject: Re: Radiance questions No, no one I know of has ported it to DOS yet, but there is now an Amiga version they tell me and I use it on the Mac under A/UX and plenty of folks have gotten it on their IBM RS/2 running AIX. You are not the first to ask me this question, but since DOS is limited in so many ways with virtually no standard graphics interface, I haven't felt it was worth my time to attempt a port. Even if I did port the software to a DOS platform that could handle it, I'then get hundreds of people coming to me with questions like, "I tried to get Radiance to run on my IBM AT and it said something about a memory error. Do I need a hard drive?" So, I don't think I'll be porting Radiance to MS-DOS in my lifetime. Any takers? I have done some work on translators lately, but have not written anything for GIF. I have now a Poskanzer Pixmap translator and one for the TIFF format, but gave up on Utah RLE format because it was too complicated and GIF because it's only 8 bits and rather nasty itself. My best advice is to pick up the pbmplus package which offers translation between many different image format "standards", including Targa and GIF, so you can get from Radiance to the format you want. The pbmplus distribution is available via anonymous ftp from export.lcs.mit.edu (18.30.0.238) in the file "contrib/pbmplus.tar.Z". Not everything works perfectly, but it's the best package I know of in the public domain. Personally, I prefer PhotoShop from Adobe, which does image manipulation as well as import/export from many formats. -Greg ========================================== RVIEW Rview and memory Date: Thu, 5 Sep 91 09:04:54 EST From: vanwyk@arc.cmu.edu (Skip Van Wyk) To: greg@lesosun1.epfl.ch Status: RO Greg, I got the conf model up and running. I have 16MB on this Sparc2. Though it is also configured as a server for 8 accounts, right now they are inactive. Anyway, the conf scene cooked away for about 2.5 hours before I went home last night; it looked good, even in 8bit. Sometime during the night, I got the message rview: system - out of memory in refine: Not enough memory *** Error code 2 What kind of memory do you have? And would everything have worked if I logged out? --Skip Date: Fri, 6 Sep 91 10:11:57 +0200 From: greg (Greg Ward) To: vanwyk@arc.cmu.edu Subject: rview and memory Hi Skip, The problem is that you shouldn't be using rview to do your rendering. You should use it to figure out what view you want, write it out with the view command, then use rpict with the -vf option to read the view and render the file in the background. Rview is meant to be a previewer, and uses up tons of memory as it goes to higher resolutions. 
I've made an enhancement for the next release that keeps rview from bombing when it runs out of memory, but it will still run out of memory if you haven't enough swap space on your disk. A simple example of an rpict command is:

% rpict -vf myview.vp -av .04 .04 .04 electric.oct > myview.pic &

Afterwards, you can logout and come back in the morning to see if the process is still running (using ps). I selected the values for -av for the conference room model in particular, and they would be different for a different scene. (I'm also assuming that you did a "view myview.vp" command in rview or you can use one of the provided view files in the vf subdirectory instead.) -Greg =================================================== LIGHTS Non-standard Light Sources Date: Sat, 7 Sep 1991 15:18 EDT From: elci@pluto.gs.com (Reha Elci) Subject: radiance 4.0 questions + problems To: GJWARD@Csa2.lbl.gov First, I'd like to say that this is really a great package for learning and production! Thanks for making this available on the net. I have a DECStation with a PXG card, and the only problem I had so far is that ximage does not work (probably does not recognize the visual); it produces no picture and a lot of XServer errors (bad value). Currently I run the pic file through pvalue into an rle file to display it. But I have a question as well: cylinders as light sources are not supported, and neither is "glow" applied to plastic or dielectric. So how does one go about doing neon lights or glowing rubies? Those examples would truly demonstrate the power of radiosity. Testing for a maximum radius for shadow testing (like you support on glow) would even make it better! Is there a workaround? Thanks for all your help. Reha Elci PS: Please add me to the newuser list; the automatic mailing failed since my mail does not go thru to internet directly. Thanks. Date: Thu, 12 Sep 91 11:19:22 +0200 From: greg (Greg Ward) To: elci@pluto.gs.com Subject: Re: radiance 4.0 questions + problems Dear Reha, I am sorry but I don't think I can help you with your ximage problems. I have found it very difficult to debug such things remotely, and X11 servers seem to be particularly flaky when it comes to color. For example, I know very well that 24-bit color does not work properly in ximage, but I have no hardware to test it on and the hardware I have tried seems unreliable. I do plan to make cylinders usable as light sources in the near future, but until then the only way to model neon tubes is to break the tubes into many small polygons. You can use gensurf to help you with that. An example to create a tube of length 1 and radius .05:

gensurf neon lamp '.05*sin(2*PI*t)' '.05*cos(2*PI*t)' 's' 20 6

You would then define the neon material using type glow with a maximum radius of .5 (if you only wanted to illuminate nearby objects):

void glow neon
0
0
4 20 1 2 .5

As for glowing jewels, I'm not sure what you suggest is really physical. Gems usually "glow" because of light reflected many times within the stone and scintillation processes (in rubies for example). You can simulate this in Radiance by giving the red transmission coefficient for a dielectric a value greater than one. Note that giving values greater than one for all three coefficients would mean that the material was generating radiation -- which would be wrong in the case of non-radioactive materials. Hope this helps. -Greg
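[A worked sketch, not part of the original exchange: the "glowing ruby" trick Greg describes -- a dielectric whose red transmission coefficient exceeds one -- might look something like the material below. The dielectric real arguments are the red, green and blue transmission coefficients, the index of refraction and the Hartmann constant; the index of 1.76 is only a rough figure for corundum, and the 1.15 red value is an arbitrary boost chosen for illustration. Only the red channel is pushed above one, following Greg's warning that raising all three would amount to the stone generating radiation.

void dielectric ruby_mat
0
0
5 1.15 .02 .03 1.76 0

The strong absorption in green and blue is what keeps the result looking like a deep red sparkle rather than a light source.]

~s Radiance Digest v2n0 Hello Everyone, It's been a while since my last digest, judging by the amount of mail that's backed up.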
I hope that everyone has seen the 2.0 release announcement by now. A few of the enclosed messages are from people who were working with beta copies of release 2 and a few are from people who got the official release of 2, but most messages are from people who were still working with release 1.4. The order may seem a bit strange at times, as I was trying mightily to collect together some sensible categories.

MODELING	Textures, Surfaces and Lights
IMAGES		Image Translators and Animations
GENERATORS	Some new generator contributions
LUMFACTOR	Change in luminous efficacy factor
MKILLUM		New program for computing distributions
AUTOCAD		CMU work on a new AutoCAD translator
MODELS		Picking up and dropping off 3d models
ART		Radiance in the arts
RS6000		Compiling Radiance on the IBM RS/6000
SUMANT		Sumant Pattanaik's contributions
NIGHTTIME	Rendering night time images
COMPILE		Compile problems related to X11 and malloc.c
OPENWINDOWS	Some nice additions for Sun's Open Windows

Any future mail to me should be addressed to GJWard@lbl.gov or greg@hobbes.lbl.gov, as I have officially returned from Switzerland. -Greg =================================================================== MODELING Textures, Surfaces and Light Sources Date: Wed, 18 Sep 91 10:55:54 +0200 From: her@compel.dk (Helge Egelund Rasmussen) To: greg@hobbes.lbl.gov Subject: Some Radiance questions Hi Greg, I have a few questions for you about Radiance, but first I'll mention a little about the current state of the Amiga port of Radiance. At the moment Per Bojsen has most of Radiance working on an Amiga 3000 with the latest version of the OS (2.0), while I work on an Amiga 2000 with an earlier version (1.3). There are major differences between the two versions, and we've agreed to base the port on version 2.0 when it becomes available for the 2000 in the near future. Because of this, the Amiga port probably won't be available before October sometime. Nearly all Radiance programs work (including pinterp and rview), and I've rendered most of the models found on hobbes. Now for the questions: -I've created a 'cloud' pattern, and wanted to create an outdoor scene with nice clouds in the background. My scene consists of a gensky source, and a big bubble with the cloud pattern as a colorfunc. The bubble is made of translucent material so that the gensky source and sun can pass through it, and so that the cloud pattern is visible. However, I have some problems with this setup; for instance, it is possible for objects to cast shadows on the sky! The radius of the bubble is 100, while the typical object size is 5. I haven't been able to create a larger bubble because oconv then can't subdivide the objects. Do you have any suggestions on how to do this kind of thing? -In the README file for the pod life model, you write: Thanks goes to Seth Teller, who wrote the patch modeler that made this all possible. Coming up with the correct patch parameters otherwise would have been a nightmare. Is this modeller available? (I don't like to have nightmares :-) -I'm currently working on an Imagine to Radiance object converter. Imagine is a commercial 3d modeller/renderer for the Amiga. In Imagine you have full control over color (r,g,b), reflectance(r,g,b), transmittance(r,g,b), specular reflection (r,g,b), index of refraction, roughness and a few other things. At the moment, I've hardcoded that certain intervals of the parameters lead to certain Radiance materials.
I would prefer to use a configuration file instead, but the scheme for configuration files given in the converters directory is too limited. What I need is the possibility to say something like: if reflectance < 10 or roughness 100 then create plastic with same color as the Imagine object and roughness given by some formula. I've thought of using the calc routines for this, ie. writing a .cal script which chooses material type and parameters from the Imagine ditto. Do you have any comments about this scheme? -I've created a modified version of the gensurf utility, which creates an Imagine object instead of a Radiance object. The program uses the Radiance calc library. I'd like to distribute this program (w. source) to other Imagine users, but I'm not sure that I may. So the question is: May I distribute the cal*.c source together with the new Imagine gensurf program? (I'll of course mention where the sources came from!) I hope that you have time to answer all these silly questions.. Helge --- Helge E. Rasmussen . PHONE + 45 36 72 33 00 . E-mail: her@compel.dk Compel A/S . FAX + 45 36 72 43 00 . Copenhagen, Denmark From greg Wed Sep 18 11:57:55 1991 Date: Wed, 18 Sep 91 11:57:54 +0200 From: greg (Greg Ward) To: her@compel.dk Subject: Re: Some Radiance questions Hello Helge, The delay in the Amiga port sounds to be worthwhile. I appreciate the care you and Per Bojsen are taking to make things work properly. By the way, did you contact the other people interested in Radiance at the university where Per works? I am glad that someone is finally doing something with clouds. I have wanted to for some time, but haven't managed to squeeze it in. I recommend that instead of a bubble, you should apply the colorfunc pattern to the sky directly. Something like this should work:

!gensky 6 17 12
skyfunc colorfunc skybright
( your arguments... )
skybright glow skyglow
0
0
4 .8 .8 1.2 0
skyglow source sky
0
0
4 0 0 1 180

Note that your function must not use the ray parameters Px, Py and Pz, since they are not defined for an infinitely distant object. You can use Dx, Dy and Dz, however. The value that you give is multiplied by the brightness of the sky as computed by gensky's skybright function. Where there are no clouds, your colorfunc should be (1,1,1), and inside a cloud it should be significantly greater than 1. You could probably get by with using a brightfunc instead of a colorfunc if all you want to model is white clouds. I don't know about the status of Seth's modeler or if he would be willing to share it. It was originally written as a demonstration program for the SGI IRIS workstation, so I doubt that it is very portable. Rather than listen to my speculations, though, you should write to Seth directly. His e-mail is seth@miro.berkeley.edu. He left some message about his being in Israel until the 22nd, so there may be a slight delay in his response. As for the Imagine converter, translating material parameters is always difficult, especially when the original parameters are non-physical (ie. not energy-balanced). You can take a look at the nff2rad translator and see what I did there. I don't think rcalc would work in this case, since the logic is too complicated. I think a C program will probably be necessary. Please feel free to use whatever routines you like from the Radiance distribution. There are no legal problems as long as you do not resell the software as your own and turn a big profit. Recognition is always welcome. -Greg
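[A worked sketch, not part of the original exchange: the brightfunc variant for plain white clouds that Greg suggests might look something like the following. The file name skyclouds.cal and the function names are made up, noise3 is assumed to be available from the standard function library, and the scale factor 5 and threshold .2 are only tuning knobs. As Greg requires, the function uses only the ray direction Dx, Dy, Dz, and it multiplies the gensky brightness: 1 in clear sky, greater than 1 inside a cloud.

{ skyclouds.cal - hypothetical cloud brightness function }
bump = noise3(5*Dx, 5*Dy, 5*Dz);
cloudbr = if(bump - .2, 1 + 3*(bump - .2), 1);

!gensky 6 17 12
skyfunc brightfunc cloudsky
2 cloudbr skyclouds.cal
0
0
cloudsky glow skyglow
0
0
4 .8 .8 1.2 0
skyglow source sky
0
0
4 0 0 1 180

The glow and source values are copied from Greg's colorfunc example above; only the pattern type and function change.]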
To: greg@hobbes.lbl.gov Subject: Textures in RADIANCE From: Jerrell Ballard Date: Mon, 30 Sep 91 15:11:14 EDT Hi Greg, Is there a way to use a RADIANCE data file in a function to produce a texture for a surface? If so, is there an example I can examine? Thank you. Jerrell Ballard Geographical Information Systems Team Waterways Experiment Station United States Army Corps of Engineers ------------------------------------------------------------------------------ Waterways Experiment Station | Internet: ballard@mc1.wes.army.mil ATTN: Jerrell R. Ballard, EN-A | 3909 Halls Ferry Road | FAX: (601) 634-3726 Vicksburg, MS 39180 | Voice: (601) 634-2946 ------------------------------------------------------------------------------ Date: Thu, 3 Oct 91 09:45:44 +0100 From: greg (Greg Ward) To: ballard@mc1.wes.army.mil Subject: Re: Textures in RADIANCE Hi Jerrell, Do you mean texture as in surface normal perturbation, or are you talking about a pattern which affects the reflectance of a surface? In any case, I think the answer to your question is yes. A Radiance picture or data file can be used to define a pattern, and a Radiance data file can be used to define a surface normal perturbation function. Please give me a few more specifics about your problem and I will try to furnish you with an appropriate example. -Greg To: greg@hobbes.lbl.gov Subject: Re: Textures in RADIANCE Date: Thu, 03 Oct 91 10:23:54 EDT From: ballard@mc1.wes.army.mil Hi Greg,
> Do you mean texture as in surface normal perturbation, or are you talking
> about a pattern which affects the reflectance of a surface?
My apologies for being vague. I am trying to create a texture for a polygon that is a surface normal perturbation. I have a large set of x,y,z data points for a surface. Using these data points I wanted to change a flat surface into one with "hills" and "valleys". I have approached the problem by splitting the surface into little triangles, with the vertices being defined by my data points, but I keep running out of memory in rendering.
> Please give me a few more specifics about your problem and I will try to
> furnish you with an appropriate example.
Here is a test case I was trying to get to work:

data file:
------------------
2
0 100 3
0 100 4
 1.00  1.00 10.00 10.0
 1.00 10.00 30.00 10.0
 1.00  1.00 10.00 10.0

surface file:
-----------------
#
some_texture plastic some_material
0
0
5 .2 .8 .2 0 0
#
some_material polygon pertb_surface
0
0
12	0 0 0
	100 0 0
	100 100 0
	0 100 0
#

The example data file when interpolated will cover the same area as the defined polygon, so that a texture tiling is not necessary. The data file would be interpolated to create perturbations on the polygon surface. The data should make the surface appear to have a "data spike" close to the center. My purpose in this whole problem is to 1) be able to visualize three dimensional statistics and 2) overlay satellite imagery onto elevation data for a region. Once again thank you for your help. Jerrell Ballard Geographical Information Systems Team Waterways Experiment Station United States Army Corps of Engineers ------------------------------------------------------------------------------ Waterways Experiment Station | Internet: ballard@mc1.wes.army.mil ATTN: Jerrell R.
Ballard, EN-A | 3909 Halls Ferry Road | FAX: (601) 634-3726 Vicksburg, MS 39180 | Voice: (601) 634-2946 ------------------------------------------------------------------------------ Date: Fri, 4 Oct 91 08:35:06 +0100 From: greg (Greg Ward) To: ballard@mc1.wes.army.mil Subject: Re: Textures in RADIANCE Hi Jerrell, The problem with surface height data is that it doesn't really tell you about the surface orientation. Even if you converted this information to surface orientations, you would not generate shadows or contours and you would still use quite a lot of memory. Have you heard of the RayShade package written by Craig Kolb at Yale University? I think it contains code specifically for rendering large height fields for landscapes. You might want to investigate that free package as a more practical alternative for your application. Radiance was really designed more with architectural and lighting design applications in mind. If you have tried RayShade already unsuccessfully or have some other compelling reason to stick with Radiance for your purpose, I will try a little harder to think of some way to make it work. -Greg P.S. RayShade is available via anonymous ftp from weedeater.math.yale.edu (130.132.23.17) Date: Sat, 12 Oct 91 19:52:35 PDT From: chas@hobbes.lbl.gov (Charles Ehrlich) To: greg@hobbes.lbl.gov Subject: Source datafile questions Greg, I'm in the process of creating fixture descriptions using candlepower distribution on paper media (no IES magnetic media available.) I need to know under which circumstances does one use the various functions in source.cal that have to do with illumination output. For example, if my source object (type illum) is a sphere, do I need to use the flatcorr or the corr functions (or perhaps none at all)? I've figured out that phi2 is bilaterally symmetrical and that phi4 is quadrlateral symmetrial output, but what is "type B photometry?" I suppose that with a formal lighting design background I might know these answers, but if this question is quickly answerable, that would be great, but a reference to a good book would be fine too. Secondly, I'm concerned about the fixture looking as realistic as possible, so I am spending a good deal of time modelling the geometry of the fixtures. Undoubtable I run into situations in which I have to make the illum sphere larger than the actual fixture. If the fixture is a pendant whose overall dimension is less than its distance from the ceiling, everything is fine. But if the fixture is wall mounted or a pendant close to the ceiling, there is the issue of illuminating those surface that lie inside the envelope of the fixture-enclosing illum sphere. My solution has been to put a small sphere entirely inside the fixture that "glows" with the intensity of the fixture itself. It would be easier to simply give those surfaces of the fixture that appear bright the glow material directly, but since the "back" faces of the polygons face the non-illuminated surfaces of the wall/ceiling, something inside the fixture is still needed. If I did apply the glow material to the individual surfaces that make up the fixture itself (several hundred) what is going to happen to the distribution data? Does it need to be altered to account for the fact that there are so many of these individual surfaces? For the case of the glowing sphere inside the fixture, what is the best way to describe its output. Should I use the same output distribution pattern of the larger illum with a proper scaling factor for the smaller size of the sphere? 
Or should I use a simple glow (no distribution) with the correct radiance value as calculated from the intensity of the lamps. My reasong for wanting to do it the second way would be to minimize calculation times but my concern is that the distribution along the nearby surfaces would not be accurate. This glow type will have a radius of effect just larger than the distance to the furthest intersection of the illum sphere with the nearby surface. In the areas where this radius of effect is in fact greater than the bounds of the illum sphere, does that area become twice as bright as the fixture output distribution? My guess is that it does, and this is somewhat of a problem, except that for the most part, this whole area is going to be much brighter than the surrounding image and most likely not visible. But, a better solution might be to give the illum material the ability to be opaque to source rays looking for particular light source material names/types just as the secondary source type mirror does. One other minor question. From where does the radius of effect for the glow material take effect? At the surface of the sphere? Or perhaps more easily understood would be from the center of the sphere? Well, there's a ethernet-packet-full to think about. In regards to the work for my lighting designer client...I was 15 minutes late getting the images to her because the Kodak printer wouldn't print the last one...there seems to have been a problem in transfering it back from the macintosh where I edited it with Adobe Photoshop. Thanks for the help. This might be a good one for general distribution. Chas Date: Mon, 14 Oct 91 10:53:24 +0100 From: greg (Greg Ward) To: chas@hobbes.lbl.gov Subject: Re: Source datafile questions Hi Chas, Gee, so many questions. I doubt this will be of general interest, since most folks will never have to get into the nitty-gritty of light source modeling (I hope!), but I will put it in the next digest just in case. Type B photometry is a different measurement scheme where a plane of evenly distributed photometers is used to measure the beam candlepower of a spotlight. This measurement technique is used most frequently on car headlamps, although you might find some floodlights measured this way as well. For most interior fixtures, type A or type B photometry is used, and those differ only in the definition of angles. A mediocre reference on the subject is the IES Lighting Handbook, Reference Volume. (That is the only one I know of.) I think the only way to define your light source correctly is to use illum surfaces with the proper distribution, and have those surfaces enclose the actual geometric description for the fixture. If you use a single sphere to enclose the fixture, then you should NOT use the flatcorr function defined in source.cal. Use the corr function if you would like another place to insert a multiplier (A1), but I use the predefined noop function ordinarily. If you enclose the fixture with polygons, then DO use the flatcorr function. You may use the same material to modify all polygons, applying the lamp distribution to this material. In any case, you must use the recipricol of the projected emitting area in square meters as you have defined it with your illum surfaces. This determines the total light output of the combined fixture. As for the illumination of the fixture and wall/ceiling surfaces, you should get adequate results if you are careful in assigning glow surfaces and your enclosing illum geometry. 
Make every attempt to enclose the fixture as tightly as possible so that surfaces above or behind the fixture are illuminated. If a sphere would intersect a neighboring surface, use a box instead. This will only increase computation time slightly. Under no circumstances should you use a hundred light source polygons to describe your fixture. Although it might work (and you could use the same distribution function for each), the cost would be enormous. Use glow materials to modify your fixture geometry, with zero as the radius of influence. (The radius is measured from the center of a sphere by the way.) If your light fixture is designed to be flush mounted, you may find it necessary to space it a short distance from the wall or ceiling in order to squeeze your illum surface between. This is still preferable to putting an emitting glow surface inside the light source, I think. I hope that this answers most of your questions. Getting detailed models of light fixtures is a real challenge! -Greg From: Krister Lagerstr|m Subject: Re: Radiance mailing list To: greg@lesosun1.epfl.ch Date: Thu, 24 Oct 91 15:03:51 GMT-1:00 > > Would you like for your name to be included on our mediated mailing list > for Radiance users? To people on this list, I mail periodic summaries of > e-mail discussions with users as well as update announcements. > > -Greg > Yes, I'm interested in getting the mailing list. I haven't really used the package much yet, but it seems like one of the best public ray- tracers around. Another thing I'm interested in is some sort of CAD program that can use radiance's features and produce .rad-files. Perhaps something like the 'preview' program, but more interactive and user-friendly with the option to add objects, change colors and textures, and change views run-time. If you know of such a program, please let me know... / Krister Lagerstrom Date: Thu, 24 Oct 91 16:36:14 +0100 From: greg (Greg Ward) To: ksla@me.chalmers.se Subject: Re: Radiance mailing list Hi Krister, Gee, I sure wish I did know of such a program! There is an editor for the MacIntosh written by Paul Bourke of New Zealand and available on hobbes in the pub/mac directory that will edit and write out polygonal descriptions in Radiance format. I know of no program tailored to Radiance's particular talents, but you can use AutoCAD to produce a DXF file then use either the AutoLISP converter written by Robert Amor or the C dxfcvt program written by Ning Zhang to get a Radiance file minus the material descriptions. Both of these programs are included in the standard distribution. Jennifer Schuman has written a HyperCard-based interface to the arch2rad program for assigning and even defining materials to go with an Architrion description. This is probably the most sophisticated translator we have, but it only works under A/UX on the MacIntosh at the moment. Next year I plan to do more work in the user interface area, but primarily for running the simulation and not so much for modeling. It's just too difficult to work on modeling for me... -Greg From: malle@rpksun1.mach.uni-karlsruhe.de (Bernhard Malle) Subject: Architectural buildings To: greg@lesosun1.epfl.ch Date: Fri, 29 Nov 91 9:20:07 MET Hello Greg, I have built a house, with all the windows, doors and the garden (with the help of our modeller). I would like to know, how did you specify the "environment" in the example picture that comes with the radiance-package, i.e. how did you define the sky? Is it a great sphere, whith the house right in the middle? 
( I know from the testroom, how to define a window with the skyfunc ). Which material did you attach to the walls? I think there is no stone material in Radiance; what should I use instead (plastic or metal)? Concerning the acis-modeller and the radiance package, I didn't have the time to take a closer look into the code to see whether and how I could integrate the modeller-routines. But I hope I can start with it before Christmas. Thanks for your help. Bernhard Date: Fri, 29 Nov 91 09:35:38 +0100 From: greg (Greg Ward) To: malle@rpksun1.mach.uni-karlsruhe.de Subject: Re: Architectural buildings Hello Bernhard, The description of the exterior is accomplished with glowing sources at an infinite distance, as shown in the example.rad file in the tutorial. You should not use a sphere, as it is a finite object. However, you may want to put down a ground plane, to make the outdoor shadows appear correct. I usually create a large polygon or disk with its center under the house and extending to some reasonable distance on all sides, say 5 times the size of your structure. Unfortunately, I do not have a nice stone pattern, but if you have a picture you can digitize it and make it into a Radiance pattern with a little effort. The following will give you a rather featureless concrete:

void brightfunc dirty
2 dirt dirt.cal
0
1 .3
dirty plastic concrete
0
0
5 .3 .3 .3 0 0

The dirt function gives at least a little variation on the surface appearance so it doesn't just look flat. -Greg From: malle@rpksun1.mach.uni-karlsruhe.de (Bernhard Malle) Subject: Re: Architectural buildings To: greg@hobbes.lbl.gov (Gregory J. Ward) Date: Mon, 9 Dec 91 20:06:58 MET Hello Greg, thanks for your answer and the hints. The example that I mailed to you was just a very, very simple small example. Normally the cone is replaced by a large house with several rooms, windows and stairs; the thin block is replaced by a garden shaped after a real existing landscape (actually the house of my parents). So I do have the need to simulate daylight. I hope that I will have finished this example by Christmas, as I hope to present a real photo-realistic image as a present to my parents (apart from implementing the possibility to specify material conditions in our cad-system.) So I wish you a wonderful Christmas. Bernhard PS: I have succeeded in unpacking and unstuffing most of the documentation in mac.sit.hqx. The only thing that is missing seems to be the flow of data (a MacDraw document). Date: Mon, 9 Dec 91 11:11:18 PST From: greg (Gregory J. Ward) To: malle@rpksun1.mach.uni-karlsruhe.de Subject: Re: Architectural buildings If you are serious about daylight, I recommend going through the tutorial (ray/doc/tutorial.1) and following those examples to learn how to do it right. ==================================================================== IMAGES Image Translators etc. Date: Sat, 12 Oct 91 11:23:40 NDT From: pdbourke%ccu1.aukuni.ac.nz@Csa2.lbl.gov Subject: rpict and image size To: GJWard@Csa2.lbl.gov Scream...great gnashing of teeth...Am I correct in assuming that at the moment RPICT only generates square images? I get a 256x256 image whether I do -x 256 -y 256, -x 512 -y 256, or -x 256 -y 512. I want to generate a 324x244 QuickTime animation; oh well, I'll find another way of doing it. ------------------------------ Paul D.
Bourke pdbourke@ccu1.aukuni.ac.nz (130.216.1.5) Date: Sat, 12 Oct 91 20:03:04 NDT From: pdbourke%ccu1.aukuni.ac.nz@Csa2.lbl.gov Subject: QuickTime movie To: GJWard@Csa2.lbl.gov You may know that QuickTime has been shipped to developers (beta release anyway!) I have been writing some QuickTime stuff over the last few days and have deposited(*) our first (and possibly the world first) QuickTime "movie" for which the frames were generated using Radiance. The scene was generated in a bit of a rush (don't know why there isn't a mirror above the sofa, we certainly think there should be one there) and the frames were put into a QT movie using very crude software of our own...but it kinda works. I'm doing a visualisation (flyaround) of a Steiner surface next for the Maths department here. (*) it's been deposited in the Mac directory or course. ------------------------------ Paul D. Bourke pdbourke@ccu1.aukuni.ac.nz (130.216.1.5) Date: Mon, 14 Oct 91 09:04:50 +0100 From: greg (Greg Ward) To: pdbourke%ccu1.aukuni.ac.nz@Csa2.lbl.gov Subject: Re: QuickTime movie Hi Paul, I just read your blurb about QuickTime, and unfortunately we don't have new enough systems on our Mac's to run it right now. (Drat!) Rpict by default adjusts the size of the image to guarantee square PIXELS, not square images. If your pixels are not square, you may enter their aspect ratio (height/width) with the -p option, or if you specify -p 0, rpict will use the explicit x and y dimensions you give it. Normally, rpict uses the x and y dimensions as a maximum rectangle in which to put a picture whose pixels have the given aspect ratio. If you are doing walk-through (or fly-through) animations, you really should avail yourself of the -z option of rpict and the pinterp picture interpolation program. This is what I use for all my animation, and it has the potential to smooth animations at a very reasonable cost. However, I'm not sure it's worth it at 9 frames/second. It depends on what kind of delta you have between images. I can send you a shar file example if you like. By the way, someone I know (Charles Ehrlich) has been using Russell's Super3D translator to great effect. -Greg P.S. Sorry things have been slow getting the 3d-editor and converters online. We're waiting for some cooperation from the folks back home... From: nfotis%theseas.ntua.gr@Csa2.lbl.gov (Nikolaos) Subject: About the Sun RasterFiles problem To: gjward@Csa2.lbl.gov (Greg Ward) Date: Mon, 14 Oct 91 17:13:45 EET Subject: Here's the solution with SunRast files Dear Mr. Ward, Remember when I talked about the strange behaviour of PBM+ tools with Sun 24-bit rasters produced from Radiance? Well, it seems that here's the solution: -- From the USENET comp.graphics group: Sven-Ove Westberg writes of problems with sunraster 24-bit format: it's not clear whether the order of values in a pixel is R,G,B or B,G,R. Graeme Gill says: > From my experience in getting the Portable Bit Map (pbm) utilities >and xloadimage to agree on sun raster files, I came to the conclusion that >both were broken in coping with RGB and 32 bit format files. It seems probable >that other programs are also broken. I ran into this quite a while ago, and eventually got a definitive answer by asking on comp.sys.sun, or someplace like that. It seems that BOTH orders are right --- Sun changed their mind at some point! I've already submitted a bug report to Poskanzer for PBMPLUS; it doesn't look like he's done anything about it in the latest release. 
>From my archives: To: Jef Poskanzer Subject: Sun rasterfiles again Date: Thu, 21 Mar 91 15:57:23 EST Message-ID: <7473.669589043@G.GP.CS.CMU.EDU> >From: Tom.Lane@G.GP.CS.CMU.EDU This little tidbit indicates that you had better support *both* color orderings in 24-bit Sun rasterfiles. Don't know if you were aware of that. tom ------- Forwarded Message >From: Bob Myers Date: Thu, 21 Mar 1991 09:31:08 PST Organization: Unocal Science and Technology Division To: tgl@CS.CMU.EDU Subject: Re: Color assignment in Sun rasterfiles >From the man pages for SunOS4.1.1: NAME redxblue - swap red and blue for a 24 or 32 bit rasterfile. SYNOPSIS redxblue [-v] [-q] [inrasf|-] [outrasf] DESCRIPTION redxblue converts an old-style 24 or 32 bit rasterfile into the newer, Sun-standard format. The old format had the byte ordering RGB for 24-bit rasterfiles and XRGB for 32-bit rasterfiles. The new format has BGR for 24-bit rasterfiles and XBGR for 32-bit rasterfiles. The conversion is performed simply by swapping the red and blue bytes. The primary use of this utility is to prepare rasterfiles in the old format for dithering with 24to8 or viewing with the NeWS 'readcanvas' operator. It is also possible to use this filter for converting from a new style format into the old format. OPTIONS -v Verbose mode will print information as it processes the image. (The default is to be silent.) -q Query (prints list of options) SEE ALSO 24to8(1) - -- Bob Myers [714] 528-7201 x2339 Unocal Science & Technology Division stssram@unocal.com Brea, California myers%unocal.uucp@sunkist.west.sun.com ------- End of Forwarded Message So there you have it: you may need to support both orders depending on the age of the software and/or image files you have. Yech. -- tom lane Internet: tgl@cs.cmu.edu BITNET: tgl%cs.cmu.edu@cmuccvma --- End of Usenet message. I think that it should be included in the next version of Radiance Docs, or in the next digest. Me? We've got back the H-P, but now the disk space is absent... :-( (I HATE multiple architectures, binaries and administration headaches!) Greetings, Nick. -- Nikolaos Fotis National Technical Univ. of Athens, Greece 16 Esperidon St., UUCP: mcsun!ariadne!theseas!nfotis Halandri, GR - 152 32 or InterNet : nfotis@theseas.ntua.gr Athens, GREECE FAX: (+30 1) 77 84 578 Date: Mon, 14 Oct 91 17:12:08 +0100 From: greg (Greg Ward) To: nfotis%theseas.ntua.gr@Csa2.lbl.gov Subject: Re: About the Sun RasterFiles problem Thanks for the information about Sun rasterfiles. Ra_pr24 does support both formats on input, but only produces BGR ordering (the older format) on output. Perhaps you are right, and I should provide an option to produce the RGB ordering, but this comes with a different value for the image type and some programs will still bomb. Basically, there are programs out there that you MUST provide with a bogus rasterfile in order for them to function. This I don't feel the need to support. Anyway, here is a new version of ra_pr24.c with a -rgb option for you: [included in release 2.0] Date: Sat, 19 Oct 91 17:41:41 NDT From: pdbourke%ccu1.aukuni.ac.nz@Csa2.lbl.gov Subject: rad2pict To: GJWard@Csa2.lbl.gov I don't know if you have been informed but we have got a radiance to PICT converter going. It takes a Radiance image file and creates a 24bit PICT. Yesterday I wrote a flight path generator. It takes a file of key frames vp, vd, vu vectors and the number of tweens and any other rpict parameters. 
It generates a whole stack of rpict calls with the inbetween vp, vd, vu; the other rpict parameters are just replicated. Currently it supports linear interpolation only; I plan to do spline interpolation today. This is all for a really nice animation that is a flight over a landscape for the terrestrial botanists here. ------------------------------ Paul D. Bourke pdbourke@ccu1.aukuni.ac.nz (130.216.1.5) Date: Wed, 23 Oct 91 10:01:25 NDT From: pdbourke%ccu1.aukuni.ac.nz@Csa2.lbl.gov Subject: ra2pict To: GJWard@Csa2.lbl.gov I have deposited the Radiance to PICT converter in the TRANSLATORS directory. It also includes a Radiance to RGB RAW converter. ------------------------------ Paul D. Bourke pdbourke@ccu1.aukuni.ac.nz (130.216.1.5) From greg Wed Oct 23 09:44:06 1991 Date: Wed, 23 Oct 91 09:44:05 +0100 From: greg (Greg Ward) To: pdbourke%ccu1.aukuni.ac.nz@Lbl.Bitnet Subject: Re: rad2pict Status: RO Thanks very much for your ra2pict program. With your permission, I would like to rename it ra_pict and include it with the standard distribution. I agree that ra2pict is a little nicer, but the convention I started with is to use this2that for CAD translators and this_that for image format translators. Also, since most of the image translators I've written support translation both ways, the underscore seems a little more appropriate since it is a little less directional. By the way, I am curious why you found the need for the ra2raw program when pvalue can produce the same output with the -i and -h options? I have written a rather involved script for walk-through animation using pinterp for inbetweening and rcalc to compute Catmull-Rom spline interpolated camera positions. It is not with me at the moment, but I will bring it in tomorrow from home and mail you a shar file. I have wanted to write a general animation controller for some time, but I do animations so infrequently that I don't really have it down well enough to warrant going away from a script. I was hoping that you might use some of what you find in the script to enhance the controller you're developing. -Greg Date: Thu, 24 Oct 91 7:55:45 NDT From: pdbourke%ccu1.aukuni.ac.nz@Lbl.Bitnet Subject: animation To: greg%lesosun1.epfl.ch@Lbl.Bitnet
> By the way, I am curious why you found the need for the ra2raw program
> when pvalue can produce the same output with the -i and -h options?
I didn't know about these options for pvalue, but the real reason was to make sure we had something that correctly read Radiance image files.
> I have written a rather involved script for walk-through animation
> using pinterp for inbetweening and rcalc to compute Catmull-Rom
> spline interpolated camera positions. It is not with me at the
> moment, but I will bring it in tomorrow from home and mail you
> a shar file. I have wanted to write a general animation controller
> for some time, but I do animations so infrequently that I don't really
> have it down well enough to warrant going away from a script.
I would like the maths for the Catmull-Rom spline...
> I was hoping that you might use some of what you find in the script to
> enhance the controller you're developing.
I don't remember exactly how much of my flight path generator I described last time but here goes (possibly again)

Usage: flightpath keyframefile interpolation [rpict options] octfile

where the key frame file contains one line per key frame with nine numbers (3 vectors), namely vp, vd and vu. The interpolation at the moment is either 'l' or 'b' for linear or bezier.
Might do some spline today, at least something that actually passes through the key frame points whereas bezier stays inside the convex hull; more annoying than I thought. The rpict options just get copied into a file (name is hardwired at the moment) which contains a list of rpict calls. Oh yes, I almost forgot: the first line of the keyframefile contains the number of inbetweens for each keyframe. Since I've written this for my needs at the moment, the ra2pict (ra_pict) calls are also placed after the rpict calls. We transfer these frames to the Mac as PICT files and convert them into either a QuickTime animation or merge them into MacroMind Director. I intend to write a PICS converter maybe, although it's not too high a priority. Anyway, I had an animation generating last night, 100 frames of that terrain model I was talking about last time with 45000 polygons. Preliminary work looks real good and I am seeing the people I'm doing it for today so I had better sign off and see how it went. ------------------------------ Paul D. Bourke pdbourke@ccu1.aukuni.ac.nz (130.216.1.5) Date: Thu, 24 Oct 91 14:24:16 +0100 From: greg (Greg Ward) To: pdbourke%ccu1.aukuni.ac.nz@Lbl.Bitnet Subject: Re: animation Hi Paul, At the end of this message is a shar file containing the script from my latest animation venture. Note that the keyframes it takes are in a .cal file called keys.cal rather than a view file. I wrote the view command in rview so that multiple views can be written to the same file, and this is how I selected the key frames. The view command also takes any number of additional arguments after the view file name, and these are appended to the view specification which is itself appended (as I said) to the file. I use this feature to add a value for the time between the last frame and this one, usually in seconds. I have found this to be the most intuitive way for me to control the spacing of keyframes. Easier than thinking about the number of frames inbetween, since I don't know for sure what framing rate I might use later on. Unfortunately, I do not currently have a method for going from the keyframe view file created with rview to the keys.cal file used by rcalc to generate the view parameters for rpict. Anyway, take a look at it. The formulas for Catmull-Rom interpolation are in the file spline.cal, if you can read it! Take note of how pinterp is used in the script to generate 7 interpolated frames for every frame rendered directly by rpict. -Greg

#! /bin/sh
# This is a shell archive, meaning:
# 1. Remove everything above the #! /bin/sh line.
# 2. Save the resulting text in a file.
# 3. Execute the file with /bin/sh (not csh) to create:
#	script
#	keys.cal
#	view.fmt
#	spline.cal
# This archive created: Thu Oct 24 13:56:29 1991

[The rest was deleted because it can be found in the ray/obj/cabin/anim1 directory of the standard 2.0 distribution.]
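[A usage sketch, not part of the original exchange: for readers who have not run pinterp, a typical in-betweening call from such a script might look like the line below. The file names are made up, and the exact options should be checked against the pinterp manual page; the assumption here is that pinterp accepts the same view options as rpict, including -vf.

% pinterp -vf between.vp -x 512 -y 400 frame1.pic frame1.zbf frame2.pic frame2.zbf > between.pic

Here frame1.pic and frame2.pic were rendered by rpict with -z frame1.zbf and -z frame2.zbf, and between.vp holds an interpolated view such as the ones rcalc would generate from keys.cal and spline.cal in Greg's script -- which is how a script like this gets 7 interpolated frames for every frame rendered directly by rpict.]

Date: Mon, 18 Nov 91 13:49:27 NDT From: russells%ccu1.aukuni.ac.nz@Csa2.lbl.gov Subject: Image translators To: GJWard@Csa2.lbl.gov Hi Greg, I have made a few improvements to ra2pict -- removing potentially nasty bugs, writing a man page, that sort of thing. If there is any interest I will send the new version over. There is one remaining problem, however. Most of Radiance is byte-order independent, right? The files produced on different types of machines may not be interchangeable, but the code works on any type. Unfortunately, PICT files expect their words and longs to be in the big-endian order.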
I am trying to find methods to tell the machine type from header files and/or libraries, but with little success. As far as I can tell only things like ra_t8 (Targa format) depend on byte orders. Does this program work on a little endian machine? Also, is there any call for Radiance to GIF. I have the 87a and 89a formats, and code for decoding 87a gifs (both 8 bit formats, as far as I can work out). While I am at it, how about Radiance to rle (Utah Raster Toolkit RLE)? ... to ppm? (Portable Pix maps) ... to tiff? ... X Windows bitmaps? ... SG gl files? ... IRIS images? ... jpeg? (Just while I am in the mood.) I have some of these libraries available. In most cases if you can turn the image into a stream of raw bytes there are routines to translate these into the other formats. Bye ------------------------------------------------------------- Russell Street russells@ccu1.aukuni.ac.nz Auckland University, New Zealand "Baldrick, I believe the phrase rhymes with 'clucking bell'." -- Edmund Blackadder Date: Mon, 18 Nov 91 11:16:38 +0100 From: greg (Greg Ward) To: russells%ccu1.aukuni.ac.nz@Csa2.lbl.gov Subject: Re: Image translators Status: RO Hi Russell, Thank you for your letter and for all your work on Radiance translators! Of course, I am very interested in getting your latest version and man page for ra2pict. I am presently putting together release 2.0 of the software, and would like to include ra2pict within the main distribution (with your permission) because it is such a useful program. I have been using the older version myself without problems so far, but I am glad that you are a perfectionist in removing potential bugs. Did you make the compiler compatibility changes I suggested to your version? Also, there have been some minor changes to the calls to open a Radiance picture since you wrote your original version. At the end of this letter I have included a skeletal translator using the new calls. Byte order is indeed a problem for some of the so-called "standard" image formats. I have made all Radiance files byte-order independent, including the picture files, but the Sun rasterfile format used by ra_pr and ra_pr24 does depend on byte ordering. Fortunately, the Targa file format specifies byte ordering and is thus not dependent on machine differences in this regard. If the PICT format specifies ordering, then we must follow. It is not necessary to find out the byte ordering of the host machine, simply use getc() and putc() to do all your input and output and pack/unpack the words yourself as I do in ra_t8.c. I have indeed had requests for Radiance conversion to GIF format, but have not done anything about it myself. GIF seems to me to be a rather nasty format and it varies between the PC and the Macintosh. Also, since it is limited to 8 bits, I have decided to skip it. I have written translators between Radiance pictures and ppm as well as tiff. For the latter, I used the excellent public domain library written by Sam Leffler. I figure that you can go from these formats to many other formats by picking up the Utah Raster Toolkit and the pbmplus package. If you are really interested in writing direct translators, though, I think having one to GIF would be nice. Thanks again! 
-Greg [File deleted because it is include in 2.0 distribution under ray/src/px/ra_skel.c] =================================================================== GENERATORS New object generators Date: Wed, 16 Oct 91 8:43:48 NDT From: pdbourke%ccu1.aukuni.ac.nz@Lbl.Bitnet To: greg%lesosun1.epfl.ch@Lbl.Bitnet > The generators sound nice. I suppose they're all MacIntosh applications. No, actually I've started doing some programming under UNIX and I thought these would be a nice way to learn. They are modelled after genbox, xform etc. Also some of the things we've been doing require some higher level generators, for example, I will probably write a stairwell generator for someone here...floor height, number of landings, step size, width...etc Someone else also wants a column generator...? ------------------------------ Paul D. Bourke pdbourke@ccu1.aukuni.ac.nz (130.216.1.5) ======================================================================= LUMFACTOR Luminous Efficacy changed from 470 to 179 Date: Thu, 24 Oct 91 14:40:06 +0100 From: greg (Greg Ward) To: chas@hobbes.lbl.gov Subject: BUG!!!! Hi Chas, I just discovered to my dismay that I have been using the wrong value for the conversion from lumens to watts. Remember, this was the subject of a recent mail exchange regarding the conversion for luminaires. Anyway, 470 is not the correct value, and neither is 683. The correct value is (may I have the envelope, please): 179 lumens/watt. This is the luminous efficacy (as it's called) of white light over the visible spectrum. I don't know what I did before to get 470, but I obviously did it very badly. The good news is that this does not invalidate all of the previous work we have done or make the luminances we quoted to people less accurate. Fortunately, whatever value was used before was used twice, once when converting luminaire values to radiance, and again when converting the computed radiance values to luminances. Thus, the two mistakes cancelled each other. This is a good part of why I never knew the value was so far off. The bad news is that the next time you compile the Radiance sources, and in the official release of 2.0, the values in all the luminaire data files and all the gensky output will suddenly be wrong by a factor of roughly 2.6! Also, using the newer version of ximage on an older file will give the wrong luminance values with the 'l' command. This is a major pain and it is really causing me great grief and remorse. It would almost be better to leave everything alone and let the wrong value stand. Progress, who needs it? Hobbes now has the updated versions of the source, but I have not compiled it there nor have I given you the corrected binaries for the Mac. I thought you could wait until a quiet point to tell me when you wanted them, when you have time to make the correction to your luminaire data files. Just divide all the values by 2.63 -- in vi you can go to the first line of data in each file and execute: :.,$!rcalc -e '$1=$1/2.63' -Greg [This change has been incorporated in version 2.0. See the note in the file ray/doc/notes/ReleaseNotes for more information.] Date: Thu, 24 Oct 91 07:08:33 PDT From: chas@hobbes.lbl.gov (Charles Ehrlich) To: greg@lesosun1.epfl.ch Subject: Re: BUG!!!! Greg, It just goes to show that we are all human and are all allowed to make mistakes. Congratulations! Just a few clarifications. When saying that previous work from gensky and such will be off by a factor of 2.63, does that mean when re-calculated, or only in the stored image files? 
Could this bug cause my sources appear too bright when using consistant older binaries? The luminarire data files in question are the *.dat files and not any of the .rad files that might contain brightfunc or illum definitions with correction factors? I haven't yet downloaded the Mac binaries. I think this latest event is cause for me to get them. Could you please update them and I will convert my luminaire data files pronto. This all reminds me of another suggestion that I've had a hard time justifying but always thought would be great to have, namely, a VERSION entry in output files. That way ximage would know when an image was calculated with the older values. Another concern with regard to header entries...I wish there was a more exact way of quickly knowing the resolution of an image. With the PIXASPECT and "maxima" definitions for the -x and -y options, one can not tell what the resolution of the store image is. A RESOLUTION entry would solve that. The world will not focus on this one mistake and judge you and your work by it! I still really believe in the work we're doing! Chas Date: Thu, 24 Oct 91 16:01:09 +0100 From: greg (Greg Ward) To: chas@hobbes.lbl.gov Subject: Re: BUG!!!! Hi Chas, Do you ever sleep? The problem with gensky is two-fold. First, if you have files that were GENERATED by the old version of gensky, the radiance files therein will be in error, but consistent with the errors in the image display and analysis programs that give luminance. Second, pictures produced using these files by either the old software or the new software will be wrong in terms of absolute radiance. Scene files that merely contain a call to gensky (using an exclamation point) will give the correct results with the new software. Perhaps gensky is not so much of a concern. Given the enormous variability of daylight conditions, a factor of 2 or 3 is not so huge. Much more concerning is what to do about pictures generated using luminaire data and older versions of ies2rad (or handmade files). The pictures and data files will both be wrong when handed to the newer programs. The only fix I can think of for the picture files is to manually edit the EXPOSURE value in the header with the following: % echo EXPOSURE=.381 > newpicture % cat picture >> newpicture The reason this works is because ximage applies the inverse of the exposure values stored in the file to get back the original absolute radiances before doing any conversion to luminance. Thus, by claiming that we changed the values in the file by a factor of .381 when we really didn't, the new ximage will end up using corrected original values. This also works for the -o options of pvalue, pcomb, etc. It doesn't matter where the correction appears in the header -- they are all multiplied together. I've added the following alias to my .cshrc file for convenience: alias pfixabs '( echo EXPOSURE=.381 ; cat \!:1 ) > \!:1.$$ ; mv \!:1.$$ \!:1' Note that this alias overwrites the existing file, so you may want to take off the mv command at the end if you don't want this to happen. To answer your other question, light sources should not "appear" too bright using the older versions. The absolute values in fact do not affect appearance at all. Only using the 'l' command in ximage or some other means to get at the actual numbers can you ever know the difference. Again, using the old version of ximage with the old version of ies2rad or gensky will produce the same results as the new with the new. 
It is only in mixing the new and the old that the results will be screwed up. There is now a -version option to the renderers that allows you to know exactly what you're working with. This same version is also entered into the output picture in a SOFTWARE= line. Check it out! The -d option of getinfo will tell you the exact resolution of an image, as well as the bounding cube of an octree. I'm surprised you didn't know about it, but Radiance by this time has so many nooks and crannies I guess no one knows it all (including myself sometimes). -Greg ================================================================= MKILLUM New program to compute light distributions Date: Wed, 30 Oct 91 02:21:15 PST From: chas@hobbes.lbl.gov (Charles Ehrlich) To: greg@hobbes.lbl.gov Subject: mkillum Greg, I'm here with Deanan looking at the manual pages for mkillum. It says that there is no default data file name. Why did you choose to do it this way. It seems to make a lot of sense to have the option to allow mkillum to automatically create data and dist files based on the name of the surface modifiers. I understand that the best way of getting accurate results it to tweak the parameters for each surface, but for a good-enough first-run, this seems incredibly time consuming. Deanan says that mkillum is a great idea and I fully agree. I'm looking into this with Deanan at this time in preparation for the EEC project which will involve a lot of daylighting. Deanan is currently working on a fantastically detailed atrium space and is discouraged with the time it is taking to do a regular ambient calculation. What we were wondering is if it is reasonable to define a small number of somewhat large illum surfaces in our input scene file that we want to use as the eventual illum secondary sources of our mkillum processed scene file. What I am anticipating as being a problem is what to define these temporary illum surfaces as if we have not yet done the mkillum process. A catch-22. In other words, we don't want to process the thousands of surfaces that comprise the surrounding walls of our atrium. We instead want to define dummy walls made up of some invisible material that mkillum will not try to interpret during its pre-process as light sources. Is this case handled by the illum type already? or do we need another "invisible" surface material type to deal with this scenario. Just to verify our understanding of how mkillum works...it creates a complete copy of the input scene file, right? Does this new scene file then have to be re-oconv'd? What if the "main" scene file is our "invisible" illum patches and then a bunch of instances and \!xforms of other parts of the scene. We've already decided that we can just turn off mkillum just before all of the \!xforms, but is kmillum going to expand each of these such that the new scene file will have to re-oconv'ed (and all of its gory detail). The scene as it is now can't even be oconv'd unless it is broken up and instanced, then put together. Comprede? If we have xform without the -e flag set, will mkillum not expand these? Well, I guess you weren't expecting this kind of feedback so soon, huh? Chas Deanan Date: Wed, 30 Oct 91 12:05:59 +0100 From: greg (Greg Ward) To: chas@hobbes.lbl.gov Subject: Re: mkillum Hi Chas, I apologize for the wording in the mkillum manual page. Thanks for reminding me to fix it. (I had read it and been confused before myself!) In fact, the data file is set automatically based on the material name as described for the m variable above. 
The only reason the f option is there is so you can override the default naming. Also, when the m option is used to name both the material name and the data file, the number of data files will grow with each rerun of mkillum, not deleting the old files. I have this as a protection against overwriting files in the directory that happen to collide with the name chosen automatically by mkillum. Using the f option is the best way to insure the names you want without gathering a whole lot of extra files on your system. Also, the names are incremented so that you only need give this option once (at the beginning of the file) and then you can forget it. Good question on the invisible surfaces. I guess I would recommend using a trans surface with a transmission of 1, color of 1 1 1, and transmitted specularity of 1 with roughness 0. Such a surface would be completely invisible in Radiance. Face the surface normals away from the walls so mkillum will create the sources you want. Also, use the b option of mkillum to prevent the creation of light sources that have little or no output. If your atrium walls are diffuse, you can set the d option to 0 so it will save time and create diffuse sources. Finally, you should create enough of these initial surfaces (using gensurf if you like) so that you don't have one big source on each interior facade that results in an inaccurate distribution. I wouldn't consider using fewer than 16 sources per wall for a roughly cubical atrium space. You can switch mkillum "off" as you say, but it still expands all inline commands and requires rerunning oconv on the output. The way I meant it to be used was on a separate file containing the surfaces you want changed into light sources using an octree with the basic description. If you use transparent surfaces, you don't even have to include these in the original octree. You can then add the output of mkillum to the octree incrementally, thus avoiding a complete rebuild. For surfaces that do participate in the calculation (as most do), you can have two incremental branches of the base octree, one without illum sources and one with. This also permits easy comparison of results and runtimes for the two approaches. I will send you a shar file with the test scene I've been using. -Greg P.S. I'm glad you're interested in mkillum. I think it's pretty nifty, if I do say so myself. Date: Wed, 30 Oct 91 23:54:42 PST From: chas@hobbes.lbl.gov (Charles Ehrlich) To: greg@hobbes.lbl.gov Subject: mkillum ...continued Thanks for the shar of the test mkillum scene file. Going back to the atrium problem. The trans type definition is great and just what the architect ordered. Another problem I anticipate has to do with the fact that these trans/illum surfaces will exist some distance from the face of the atrium walls. The atrium walls are also varriegated, with balconies and catwalks connecting opposite sides. The problem of not being able to shape the illum to the exact contours of the source fixture (walls or pendants) comes up again. It doesn't seem like defining a trans/illum box, or even a two-sided trans/illum will eliminate the inevitable black (or default ambient value) zones between where the illum intersects the catwalk and where the wall of the atrium and catwalk actually intersect WITHOUT also making the illumination of walls themselves inacurate. 
If there was a source material type (as described in previous mail) that had the ability to NOT cast shadows (which also implies always being visible) within a certain radius of effect, our trans/illum surfaces could reside well inside the atrium walls thereby avoiding the illum/surface intersection problems. Another solution might be to make a source material type that could be selectively visible to named materials. Deanen noticed a nice feature of mkillum. If the temporary trans surface definitions (and also any #@mkillum declarations) are kept to the end of the scene file, mkillum does not expand the \!xform parts of the scene (whether or not the -e flag of xform is set.) This seems like a very nice, and very possibly unintentional feature, no? I haven't actually tested this myself, but even if it did expand the xform parts of the scene, one could always delete the unwanted parts of the new scene file, then incorporate the new mkillum created parts into the original file with xform's intact then re-oconv the scene. No problem, eh? Keep in touch, Chas Date: Thu, 31 Oct 91 13:35:52 +0100 From: greg (Greg Ward) To: chas@hobbes.lbl.gov Subject: Re: mkillum ...continued Hi Chas, In fact, I thought a little more about what I said about using a trans type and I realized that it was totally unnecessary. You can just define surfaces with the modifier "void" that you never create an octree for at all. This will be soley for input to mkillum, and it will create illum surfaces in their place with no alternate type. This is in fact much preferred to the stupid solution I suggested, and what I had in mind when I wrote the program. I just forgot! I think that the best solution is really the one Deanan suggests of giving the actual wall surfaces to mkillum as the ones to modify. This will avoid the intersection problems that you anticipate (and rightly so) from using mkillum with a free floating surface. As long as the wall surfaces are not in the hundreds and teeny-tiny or just one huge surface, things should work out. I just tested the expansion of files by mkillum and it does it unconditionally as I thought. I made it this way so that illum objects and commands might appear in subfiles, but maybe this isn't optimal for large files. I really didn't write mkillum with an ENTIRE scene description in mind. It's much better if you can separate the relevant pieces into a separate file. This might mean two runs of arch2rad with two different mapping files for example. -Greg ================================================================ AUTOCAD Carnegie Mellon working on DXF translator Date: Fri, 1 Nov 91 11:56:47 EST From: vanwyk@arc.cmu.edu (Skip Van Wyk) To: greg@lesosun1.epfl.ch Subject: xRAD Greg, I have just spent about 2.5-3 weeks on a new AutoCAD to Radiance translater. It does not alter the drawing file, and does more than the one from down under. I'll upload it in about a week, if all goes well. I have the following questions, however. Can you *not* distribute my stupidity to the globillum mailing list until we're ready? (1) I wonder if you would be willing to add an extruded polygon, much as you have an extruded circle=cylinder. This would make our .rad files much much simpler and easy to read. Would "prism" be an appropriate name for this shape? (2) I force the use of "handles" in autoCAD and so the id of each entity is "entity-. in your "mod polygon id" requirement. The part comes from my having to extrude sides, top, and bottom, for plines, etc. 
(this makes the .rad files large and difficult to read and edit). Is the id *really* required? can they be redundant, in the event that different .rad files be brought into the scene. I notice that you do check for the existance of this identifier, but I have no idea how you use it. (3) The use of sphere/bubble, cylinder/tube etc., seem confusing. Could we formalize this to be something like sphere and -sphere, cylinder and -cylinder, prism and -prism, etc., to discuss inward versus outward pointing normals of solid primitives? (4) Ring is essentially a surface primitive while cylinder is a line primitive. What I like about ring is its "direction" and it would be so nice if it additionally handled "extrusion". It seems to me that polygon, cone, cylinder, and ring could all be generalized to use this xdir,ydir,zdir feature with extrusion, -- and again, it would simplify the input files tremendously. (5) As of today, my translator does not do traces, solids, insert, text, or block, but it does do line, point, circle, 3dpoly, 3dface, 3dline, and parts of pline. And, the interface allows one to query drawing entities, change files (for example, when outputing blocks to separate files), and to set system parameters, like default radius, extrusion length, and arc resolution. I hope these questions/suggestions are helpful. I have spent so much time with my students building models that we have not had the time to really work with the Radiance software. And so organization of models and automatically constructing .oconv files, material files, etc., has been a big objective of my translator. Let me know your feelings thus far. --Skip Date: Mon, 4 Nov 91 10:33:49 +0100 From: greg (Greg Ward) To: vanwyk@arc.cmu.edu Subject: Re: xRAD Hi Skip, Thanks for your input. It sounds like you have done a lot of work on this translator. I will respond to your questions in order (1) You can use the genprism command to get a more reasonable representation of a prism. I have endeavored to keep the primitive surface types in Radiance as simple and basic as possible, with higher order surface types supported by external generator programs. The following line in a Radiance scene file would expand to a collection of surfaces (aptly named) describing a 5-sided prism: !genprism red_plastic prism 3 10 4 -3 1 6 12 -l 0 0 5 The vertices of the base polygon lie in the XY plane, and are (10,4), (-3,1), and (6,12). The extrusion is straight up in Z, length 5. The ordering of the vertices means that this prism will have its surface normals pointing outwards. (2) The surface identifiers are used only in error messages so that the user can easily locate problematic sections of the input scene file. If a few polygons in the file have the same name, the user may still be able to find the one causing difficulties, but if all the polygons are named "joe", it's hopeless. Identifier names for modifier types (patterns, materials, etc.) are used to link to the surfaces they modify, but you may still reuse the modifier names if you choose. The most recent definition of a modifier is always the one which is used. (3) I like your suggestion of naming surface types with inward normals using -sphere instead of bubble, etc., and I wish I'd thought of that when I wrote the input language seven years ago, but I think it's a little too late to be making such fundamental changes. There are other decisions I would like to change as well, but out of respect for the work others have done with the software already, I leave well enough alone. 
One thing I have done (for the next release) is made the input for spheres and cones more forgiving. If given a negative radius for a sphere or bubble, for example, the programs invert the type and the radius and print a warning message instead of bombing. The same goes for cylinders, tubes, cones and cups. I did this with translator writers specifically in mind, following some suggestions made by Paul Bourke. (4) With the exception of the source and instance types, I see all the so-called surface types of Radiance as surface boundary primitives. The extrusion of a boundary would necessarily be a solid, and solids do not really have meaning in Radiance. Are you really asking for cones and cylinders whose ends are cut at oblique angles? -Greg Date: Mon, 4 Nov 91 11:08:28 EST From: vanwyk@arc.cmu.edu (Skip Van Wyk) To: greg@lesosun1.epfl.ch Subject: Re: xRAD Thanks, Greg. The problem with genprism as it now stands is its lack of generality. The vertices, if extended to [x,y,z], along with -l vecx vecy vecz or -l distance, could be very helpful to most modelers. AutoCAD lets users establish an arbitrary coordinate system on which to construct objects . . . for me to rotate those back to a world coordinate system before using genprism means one transformation, genprism, and then another transformation or xform . . . I'll alter your genprism to a new "genprismx" to accomplish the above. And, I think this is also the kind of contribution you hope users make! By the way, I have to do some daylighting calibrations. While in Stutgart at the Institut for Bauphysik, I was given a pre-release copy of their comparisons of Radiance with other lighting models. Have you seen it yet? And, I didn't really watch you very carefully this summer as you demonstrated "taking off" luminance values from the picture. I assume that one mustdo a hemispheric view, from a specific point of interest. Correct? Thanks, Skip Date: Mon, 4 Nov 91 17:15:36 +0100 From: greg (Greg Ward) To: vanwyk@arc.cmu.edu Subject: Re: xRAD Hi Skip, It should be possible to use genprism and pipe the output to xform to get any prism orientation you want. I don't know how easy it is to go from an AutoCAD coordinate system to the necessary translations and rotations for xform, but it should not be necessary to perform two transformations as I understand the problem. A cursor pick or drag followed by the 'l' command in ximage will display the luminance value for a point or area, respectively. It is not necessary to do a hemispherical view unless you want to know the illuminance at a point. In that case, you should probably use rtrace separately with the -i option. The next release of Radiance, which is due out this month, provides many more features for daylight calculation, including illuminance and daylight factor routines. I have not seen the Stuttgart report, which is surprising since we have been collaborating on this work for a couple of years now. I suppose I should ask them to send me a copy. -Greg =============================================================== MODELS Scene model data bank Date: Mon, 4 Nov 91 15:29:21 Z From: Augusto Sousa To: Greg Ward Subject: NFF Files Dear Greg, How are you since the rendering workshop in Barcelona? I hope that you are well. I am sending you this email because (if I well understood) you have 3D scenes for Ray-Tracing in the NFF format that we could get by ftp. How can we get them and, perhaps, add some more? Awainting the favour of your reply, Kind regards, A. Augusto de Sousa P.S. 
Let me know if I can help you in any thing. Date: Tue, 5 Nov 91 09:41:39 +0100 From: greg (Greg Ward) To: aa_sousa@inescn.pt Subject: Re: NFF Files Dear Augusto, Thank you for your letter. I do indeed have scene descriptions that you can pick up by anonymous ftp, but they are in my own Radiance format rather than NFF. I have a translator to go from NFF to Radiance, but not vice versa. You can pick up both Radiance and the scene descriptions from hobbes.lbl.gov (128.3.12.38). The README files should explain where everything is, but just to save time you should pick up ray1R4.tar.Z from the ftp directory if you want to run Radiance, and the scene descriptions are in various tar files in pub/models. There is also a collection of Radiance objects (furniture and the like) in pub/objects/gjward.tar.Z. I will send in a following message a PostScript form of the document describing Radiance's input format. If you have any descriptions to offer in NFF format, I invite you to deposit them in either the pub/models or pub/objects directories on hobbes. Please follow the directions in the README files contained therein, or ask me if you have any additional questions. -Greg ========================================================================= ART Radiance in the Arts Date: Wed, 6 Nov 91 08:55:13 +0100 From: greg (Greg Ward) To: raylist@hobbes.lbl.gov Subject: Radiance in the Arts Hello Everyone, Here is a question about Radiance in the art community that was sent to me today, and my response. If anyone wants to contact this person, please address it or cc to his e-mail. I would appreciate a copy also so I can post it later to the group. In related news, there is a fellow at IBM in New York (Dr. Cliff Pickover) who is collecting computer graphic art for a book. I can put you in touch with him if you are interested. -Greg ------------------------------ From: axolotl@socs.uts.EDU.AU Subject: Radiance in the fine arts? To: greg@hobbes.lbl.gov Date: Wed, 6 Nov 91 16:17:48 EST Greg, I was wondering if you knew of any examples where Radiance had been used in the fine arts? I fired up the 'podlife' model, and it looked great. So I'm curious to know if any more exist, or at least if Rad has been installed at any "fine arts" sites (whatever they may be)... I'm reluctant to use Radiance because it doesn't have the kind of massively anti-aliased, polished output that I need. (I know you can render it very large and scale it down, but this is clumsy, and isn't too good for animation). The soda-store image in your (88?) paper looks good- that's the sort of thing I'm after. I hear Sumant Pattanaik is going to use your modeling language for his PhD in Radiosity. Sounds good. -- Iain Sinclair University of Technology, Sydney axolotl@socs.uts.edu.au +61 2 2812552 irsincla@uts.edu.au +61 2 3301807 (fax) >From greg Wed Nov 6 08:35:08 1991 To: axolotl@socs.uts.EDU.AU Subject: Re: Radiance in the fine arts? Hello Iain, Thanks for the complement on the Pod Life sculpture. Cindy and I have done a couple of other "artsy" scenes, another (less sophisticated) sculpture and a decorated Christmas tree. There is a group in New Zealand that did some nice things with Radiance a long time ago. I don't know if they're still using it as I haven't heard from them recently, but you might contact them and ask them about their experiences. Here is their address: Richard Cranenburgh Auckland Technical Institute Private Bag C.P.O Auckland 1 Wellesley St. 
New Zealand I'm sorry, but I don't seem to have their e-mail or phone number, but maybe you can look it up. As for other "Art" colleges using the software, I don't know. I'm afraid that I don't run in those circles and I don't really know an art college from a business school from a hole in the wall. I will send your request to the mailing list, though, and perhaps we will get a response. I have done animations and high resolution anti-aliased images, and I don't really agree with your comments about Radiance not producing high quality output. The separation of rendering from filtering seems quite natural once you get used to it, and it provides greater control over the time/quality tradeoffs. I don't think that other programs do it any differently, they only take away some of the control. If you think Radiance is awkward for animation, you may be right. I have a C-shell script I could send you that calls all the necessary programs for a walk-through animation. At some point it would be nice to have a program to do it all from key frames, but for how often I would use it, it's probably not worth it for me. -Greg Date: Thu, 7 Nov 91 8:01:55 NDT From: pdbourke@ccu1.aukuni.ac.nz Subject: Re: Radiance in the Arts To: greg@lesosun1.epfl.ch > >From: axolotl@socs.uts.EDU.AU > Subject: Radiance in the fine arts? > To: greg@hobbes.lbl.gov > Date: Wed, 6 Nov 91 16:17:48 EST > > Greg, > > I was wondering if you knew of any examples where Radiance had been > used in the fine arts? I fired up the 'podlife' model, and it > looked great. So I'm curious to know if any more exist, or at least > if Rad has been installed at any "fine arts" sites (whatever they > may be)... Greg passed your note onto other possibly interested parties... We installed radiance about six months ago on our SG here. While we are one of the two Architecture Schools in NZ, we are known as the "design" school. An increasing number of strudents are looking at experimenting with Radiance although there has only been one "official" project being completed at the moment. We have written some generators, parametric textures, etc. > I'm reluctant to use Radiance because it doesn't have the kind of > massively anti-aliased, polished output that I need. (I know you can > render it very large and scale it down, but this is clumsy, and > isn't too good for animation). I also originally thought that his method of antialiasing was not so hot but I've changed my mind, it gives the user much more control in the end. Regarding animation, we have done quite a bit of this using Radiance. At the moment I have done most of it for scientific visualisation work. Although it requires the writting of code there are some nice things that can be done. In particular because many forms can be generated by mathematical expression, it is possible to create animation that transform object shapes easily. Also, texture animation is easy. Most of the stuff I've done has been camera path animation, otherwaise known as flight path animation. I have written a frame generator which I will eventually install on Gregs site, it takes a file of key frames (vp, vd and vu) and generates n calls to rpict with either linear, bezier, or spline interpolation (except spline is not working yet) I play most of my animations with QuickTime on the Mac. We have written a Super3D translator if that helps... A student is about to look at particle systems, applications, etc... 
I have talked to John Fairclough at the Elam school of fine arts here, he is interested but they don't yet have ethernet to their building. ------------------------------ Paul D. Bourke pdbourke@ccu1.aukuni.ac.nz (130.216.1.5) From: desilva@ced.berkeley.edu Subject: Radiance in the arts To: greg@hobbes.lbl.gov Date: Wed, 6 Nov 91 10:50:42 PST Hi Greg, I don't know if you remember some stuff I did using Radiance about two years ago that involved projecting slides onto a floating sculpture sort of thing? Anyway, I only have one of those images left on my mac and all the slides were sent off to competitions and never returned. Oh well, I should have backed them up to tape! I can definitely say that Radiance is a powerful tool for artists. As for the animation stuff, apparently PD Bourke is working on a flight path program that will interpolate splines from key frames. How did you do the mmack animation? I have a few questions about ambient calculations. Any hints at all about choosing the right ambient parameters would be of great help! I guess the most basic question would be what options can I change on the second pass? It seems most logical that you can only change the resolution, but I tried changing the -av and got some hatchlike marks in the shadow areas. When doing the first run I did it at a resolution of 200 x 200. Too low? It appears that the resolution doesn't matter because on the second pass, ambient values are also added. Is this because of the change in resolution or does it add values for each run? If so, then I suppose subsequent runs will increase the accuracy, albeit by small amounts. And a few more: What's a good way of figuring out the -av value? Also, is -ab 2 worthwhile, and what is the increase in time? Oh, and one more: In trying to properly estimate colors, I'm following the .3*r+.59*g+.11*b formula to get a reflectance value. The colors I'm getting are very saturated and not too close to my guess at what the reflectance should be. Is there some standard set of values that I can look up to approximate the reflectances? I also borrowed a light mate light meter that can be used to figure out the reflectivity of a surface by comparing it to a reference surface. The manual for the meter suggests using a 100% reflective board. Does such a thing exist? I understand that white paper is about 68% reflective. Also, would the reflectivity value I get from the meter correspond to the formula above and to Radiance? I apologize for unloading soooo many questions on you! thanks, deanan Date: Thu, 7 Nov 91 10:34:07 +0100 From: greg (Greg Ward) To: desilva@ced.berkeley.edu Subject: Re: Radiance in the arts Hi Deanan, Yes, I do remember your work with the famous art projections. In fact, I did save some of the Radiance picture files on tape for myself. I didn't keep everything, but I did keep the following:
40b1 Looking blue from ditch
40b2 Sculpture floating in blue
40b3 Closeup of sculpture in blue
40b4 Looking orange from ditch
40b5 Sculpture floating in orange
40b6 Closeup of sculpture in orange
40b7 View with robot arm
40o1 Long view of ditch
40o2 Looking down in ditch with Van Gogh prominent
Chas can get them off of tape if you like now, or I can get them when I get back. They have taken away our film recorder, so buying a new one is high on my list. When that happens, I can make as many copies as you like. I know about submissions disappearing! Where is Xavier now, by the way?
I did the mmack animation as well as a new animation sitting on 6 tapes in my desk drawer using a C-shell script. I can send you a shar file if you are interested. For fly-by animations, the program pinterp smooths out the animation at minimal cost by using z-buffer inbetweening. The only values that are safe to change on the second pass are -ad and -as. In fact, it may make sense to use slightly larger values for these parameters on the first pass so that you get more accurate results for the values that matter the most. I often use a resolution of only 64x64 on the first pass. I'm sure that 200x200 is more than adequate. Additional values will be added with each pass because slightly different nooks and crevices will be sampled each time, and some of these may need new values. The low-frequency first pass just puts in values that have a large field of influence, and these values are the most important for good appearance. Saturated colors are not very true to life. I try and avoid them myself. The value you get from a reflectance meter should match the .3*r+.59*g+.11*b value you use in Radiance. 99% reflectance standards are available from LabSphere at a cost of around $180. The address is in my file at LBL, but since I'm here in Switzerland that doesn't do me much good. -Greg From: Nick (Nikolaos) C. Fotis Subject: Re: Radiance in the Arts To: greg@lesosun1.epfl.ch Date: Fri, 8 Nov 91 18:08:26 EET > > I have done animations and high resolution anti-aliased images, and I don't > really agree with your comments about Radiance not producing high quality > output. The separation of rendering from anti-aliasing is quite natural > once you get used to it, and it provides greater control over the > time/quality tradeoffs. I don't think that other programs do it any > differently, they only take away some of the control. It would be VERY nice if you could write a small giude in how to do a nice, antialiased image. It's somewhat necessary for us to have separated tutorials for these small, but important details. > > If you think Radiance is awkward for animation, you may be right. I > have some a C-shell script I could send you that calls all the > necessary programs for a walk-through animation. At some point it > would be nice to have a program to do it all from key frames, which is > quite possible but for how often I use would use it, probably not worth > it for me. > > -Greg Perhaps we (the Royal "WE") could make an interface to 2-3 animation programs. I'm waiting for the new version of BRL-CAD, which says that provides articulated animation, so I'm interested in building an interface to it. (And to Rayshade 4.0 or greater). Greetings, Nick. Date: Mon, 11 Nov 91 09:46:38 +0100 From: greg (Greg Ward) To: nfotis@ithaca.ntua.gr Subject: Re: Radiance in the Arts Hi again. I agree that there should be some better hints on generating nice images. The so-called "ambient" calculation and its parameters are particularly difficult to master. I keep hoping to find the time to document this stuff, but the task is daunting. Next year I will be writing a real user interface to Radiance, and into it I will build a lot of my knowledge about how to properly run the programs. Still, documentation is inevitable at some point -- the bane of all programmers! 
-Greg ============================================================= RS6000 Compiling Radiance on the IBM RS/6000 Date: 9 Nov 91 10:22:00 PST From: cvetp035@csupomona.edu () Subject: Compiling Radiance on IBM RS6000 To: gjward@Csa2.lbl.gov (gjward) Hi Greg, Do you know if anyone has compiled Radiance on RS6000? I got a lot of errors when I tried to install it as a RISC machine. BTW, Radiance is running great on the Sun Sparcstation IPCs here. I'd love to run it on the RS6000 to compare the performance. Jack Date: Mon, 12 Aug 91 19:13:37 From: marc@innerdoor.austin.ibm.com (Marc Andreessen) To: GJWard@Csa2.lbl.gov Subject: Radiance on RS/6000 Greg - Unpacked Radiance last night and got it working under AIX 3.1 on IBM RS/6000 with X11; if you're interested in the port, here's what it took: o Using defines -DBSD -D_BSD -DBSD_INCLUDES along with those for SGI (-DSTRUCTASSIGN and -DALIGN=double). o Adding -lbsd to the final link stage for each executable. o Possibly minor changes to source (#ifndef'ing out malloc decls, etc); these are probably not necessary since I moved to BSD emulation after I'd made these changes, which probably makes the changes useless. If you're interested in a genuine, clean-as-possible port I'll redo it and send you the results. Also, src/px/Makefile makes reference to 'glimage', but glimage.c is missing from the distribution; can I get my hands on that? Otherwise I'll write my own... The package looks great - thanks for making it available. Marc -- Marc Andreessen Graphics Subsystem Development IBM Advanced Workstations Division marc@innerdoor.austin.ibm.com Date: Tue, 13 Aug 91 08:47:05 +0200 From: greg (Greg Ward) To: marc@innerdoor.austin.ibm.com Subject: Re: Radiance on RS/6000 Hello Marc, Glad to hear that you got it running OK. I've got some timings someone else did on the RS/6000 if you're interested: ----------------------- >From emo@ogre.cica.indiana.edu Tue Feb 26 14:47:25 1991 To: ray@hobbes.lbl.gov Subject: Misc. Radiance 1.3 benchmarks Program: rpict, version 1.3, Date: February 22, 1991 This benchmark involves the example 1000x1000 picture described in ./ray/examples/textures as rendered from the associated makefile, ./ray/examples/textures/Makefile.
-----------------------------------------------------------------------------
(all times are in seconds)

System                              Real       User     System
-----------------------------------------------------------------------------
 Sun-4/330 (ogre)                   10:27.9    8:10.5    8.5
 SGI Personal Iris (pico)            5:41.0    5:26.5    1.6
-IBM RS6000 model 320 (off-site)     4:19.2    4:13.9    0.3
+Stardent Titan-3000 (tuna)          4:13.9    4:04.3    7.8
-IBM RS6000 model 540 (off-site)     2:50.3    2:45.2    0.2
*Stardent Titan-3000 (tuna)          1:52.2    1:45.7    4.8
-----------------------------------------------------------------------------
Legend:
+[Note: The entire image was rendered on 1 processor]
*[Note: Each processor renders 1/4 image, so this is the MAX of all timings.
        The -x, -y, -vv, and -vh parameters were adjusted accordingly.]
-[Note: The IBM timings were performed by our IBM representative off-site.]

System Configurations:

Architecture           Operating System             RAM     Processor  #
-----------------------------------------------------------------------------
Sun-4/330              SunOS Release 4.0.3_CG9      24 MB   20 MHz SPARC (1)
SGI Personal Iris      IRIX System V Release 3.2    16 MB   20 MHz R3000 (1)
Stardent Titan-3000    Unix System V Release 3.0    32 MB   25 MHz R3000 (4)
IBM RS6000 model 320   Unix System V Release ?      16 MB   20 MHz RS6000 (1)
IBM RS6000 model 540   Unix System V Release ?      ?? MB   30 MHz RS6000 (1)
-----------------------------------------------------------------------------
I would be happy to answer any questions pertaining to these timings. In no way am I suggesting that these timings are the best possible for a given architecture; rather, they were the ones I obtained and may or may not be repeatable at another site. No special fine-tuning was done either to the system or to Radiance before performing these timings. Each system was relatively quiescent and therefore had a minimal load average. eric --------------------------- I'm not sure how 1.3 compares to 1.4, but I don't expect there is much difference. I was wondering, though, why you chose to go the BSD route, when you could have more simply removed the BSD definition and gone from there? Oh well, whatever works, I always say. For some reason, glimage.c was clobbered in this distribution. It's not very sophisticated, so if you write a better one please let me know. -Greg Date: 13 Nov 91 13:18:00 PST From: cvetp035@csupomona.edu () Subject: Compiling Radiance on RS6000 To: gjward@Csa2.lbl.gov (gjward) Greg, thanks for forwarding the message from Marc Andreessen about compiling Radiance on RS6000. I've encountered several problems following his instructions, and my email doesn't seem to get through to him. I've included the email below. Hi Marc, Greg forwarded the message you sent him on how to compile Radiance 1.4 on the RS6000 running AIX 3.1. I followed your advice, but I ran into some problems. First, the compiler gave warnings about the macro fabs being redefined. I don't think this is serious. Then during the linking of psign, pvalue, pcompos, colorscale, prot, and pflip, the compiler complained that .logb, .scalb, and .finite were unresolved symbols. Thus those programs were not made. Could you help me out? Thanks. Jack cvetp035@csupomona.edu PS, should I send questions about Radiance to ray@lbl.gov instead of directly to you? Date: Mon, 18 Nov 91 09:40:39 +0100 From: greg (Greg Ward) To: cvetp035@csupomona.edu Subject: Re: Compiling Radiance on RS6000 Hi Jack, I'm afraid that I know next to nothing about the IBM RS/6000. Perhaps you are not linking to all the necessary libraries. I suspect that you should add -lm to the compilation of those programs. On most Unix implementations it is unnecessary, but on the IBM, who knows? -Greg Date: Mon, 18 Nov 91 10:19:56 +0100 From: greg (Greg Ward) To: csw22@seq1.keele.ac.uk Subject: Re: compile problems On some C compilers, there should be a space between the -L option and its argument. Try changing the -L../lib lines in the Makefiles to -L ../lib and see what happens. Also, did you install the library in the src/common subdirectory first? Makeall does this automatically, at least it should. -Greg ============================================================= NIGHTTIME Nighttime renderings From: Alexander Keith Barber Subject: night time rendering; mailing list To: greg@hobbes.lbl.gov Date: Fri, 15 Nov 91 3:43:28 CST Greg- Dwayne Fontenot and I have been using Radiance here at Rice U for the past few weeks, and it is exactly what we've been looking for. Dwayne works here at RAVL - the Rice Advanced Visualization Lab - and I'm a junior architecture major. We've been using the Radiance program to render projects that I've rendered in IBM's AES modeler. Radiance beats everything else by far for interior views! The raytracer in AES is powerful, but the setup to get a photorealistic rendered image is a pain in the ass and then some.
I still haven't produced any ray-traced images that look like I want them to. There is RayShade v.4, a great program, but it "just" rayshades; trying to render an image with natural light or with an interior view from external sources of light is not what RayShade was written for. Now that we have Radiance to play with, producing images of buildings that I or anyone else has designed is going to be a lot easier. I wanted to ask you about using Radiance with NIGHT TIME images. That is, we would like to try to render at night using the moon for a light source, along with any artificial lights that a particular building would need. I recently saw an issue of the French architecture magazine "L'architecture d'Aujourd'hui" - today's architecture - that had a lengthy special on night and light. There were pictures from old black and white films, there were shots of Notre-Dame in Paris lit by different schemes designed by local and international firms, there were free-form projects throwing light around a pitch black setting. All of this got me very interested in reproducing this kind of setting in Radiance. I would LOVE to render my last project from an exterior and interior point of view at night; it would be great to see them. We would like to know, therefore, how we should set up this kind of scene in Radiance. Any help you can give would be great. The other part of my subject is the mailing list. I would just like to be added to the list of users of Radiance and receive any info you send out to them. My physical mailing address here at school is: Alex Barber School of Architecture Rice University 6100 S Main Houston, TX 77005 My home phone is currently 713.795.4402. I can be emailed at barber@spanish.rice.edu. I would just like to finish by telling you that there has been nothing like seeing the views in Radiance of my projects, since this is the closest I will get to being "built" until I work for a firm someday. Nothing confirms your architectural ideas and how you "see" your building like having a detailed rendering on the computer that matches your "vision" for that building. Thank you for writing this program and making it available over the Internet. -Alex Barber Date: Mon, 18 Nov 91 10:57:17 +0100 From: greg (Greg Ward) To: barber@ravl.rice.edu Subject: Re: night time rendering; mailing list Hi Alex, Thank you for your kind letter and words of encouragement. I have added your name and Dwayne's to the mailing list and you will get the announcement when I release version 2.0 of Radiance later this week. Regarding night time scenes in Radiance, you can define the moon like so:

void light moon_brightness
0
0
3 12 12 12

moon_brightness source moon
0
0
4 xdir ydir zdir 0.5

Unfortunately, I have no idea how to calculate the actual location of the moon based on time of night and year. I only know its approximate radiance and size. You will have to figure out the xdir, ydir and zdir values yourself (or fake them for your convenience).
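[One way to fake those direction values: pick an altitude above the horizon and a compass azimuth for the moon and convert them to a unit vector. The sketch below assumes X points east, Y north and Z up, and the 30 and 135 degree figures are just example numbers -- adjust everything to whatever convention your scene actually uses. This is only an illustration; it is not gensky.]

#include <stdio.h>
#include <math.h>

#define PI  3.14159265358979

int main(void)
{
	double alt = 30.0*PI/180.0;	/* moon 30 degrees above the horizon */
	double az = 135.0*PI/180.0;	/* compass bearing 135 degrees (southeast) */
	double xdir = cos(alt)*sin(az);	/* east component */
	double ydir = cos(alt)*cos(az);	/* north component */
	double zdir = sin(alt);		/* up component */

	/* print the source primitive with the computed direction */
	printf("moon_brightness source moon\n0\n0\n4 %f %f %f 0.5\n",
			xdir, ydir, zdir);
	return 0;
}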
What other information do you need for your night renderings? Do you need to know about light sources? That is a more complicated topic. I have included some example electric lights in the lib/source/ies subdirectory from which you can pick and choose. If you have a particular light fixture in mind, you will need to get an IES data file from the manufacturer and translate it using ies2rad, or build up the Radiance input files yourself by hand (a little tricky). You may also use the new 2.0 program lampcolor to compute the radiance of diffuse light sources if you just want something approximate. Let me know where you need further help. -Greg ============================================================= SUMANT Sumant Pattanaik's contributions From daemon Thu Nov 14 13:13:35 1991 To: "(Greg Ward)" Subject: Date: Thu, 14 Nov 91 17:34:18 +0530 From: sumant@shakti.ernet.in Status: RO Dear Greg, It's a very long time since I last communicated with U. Had some problems. (Occupational hazards U know.) Things have not straightened out yet. Got some breathing time for last two days. Managed to make a bit of progress in that radiosity stuff. One version is ready. I think I should release it to the Radiance users now. At least if the pressure from the user group builds up I'll be able to add things to it. Otherwise it does not progress at all. One small thing. It'll be better if I generate output in the Radiance output format. All I support now are UTAH RLE format, binary (....) and text ( ....). I am not too sure about the Radiance output format. If U have a write up handy pl mail it to me. Or any pointer to the Radiance source would do. My distribution will have the following things: 1. previewer ---- A better version of the earlier previewer. 2. rad ---- The radiosity package. It does full-matrix solution. I think I'll also be able to include the progressive solution soon. 3. radfilter ---- To convert Radiance input data to the input format understood by "rad". Earlier I promised to send U the PostScript version of my MonteCarlo paper. Sorry that I have not sent it yet. It is 167K. Shall I break it down and send it in pieces? I don't hear much from HOBBES these days. Have U removed me from the mailing list? ---- sumant (email : sumant@shakti.ernet.in) ------------------------------------------------------------------ Sumant Narayan Pattanaik N.C.S.T. Juhu, Bombay 400 049 From greg Mon Nov 18 10:10:40 1991 Date: Mon, 18 Nov 91 10:10:40 +0100 From: greg (Greg Ward) To: sumant@shakti.ernet.in Subject: Re: Status: RO Hello Sumant, Of course I haven't removed you from the mailing list! Maybe I had the wrong address, though. I changed it to sumant@shakti.ernet.in now. Did you get the announcement about test simulations available from hobbes? I mentioned you as a possible contributor. Did you ever finish your comparison runs? Do you subscribe to the global illumination mailing list? If not, you should write to Paul Heckbert and tell him I said you should be on it. I am about to release version 2.0 of Radiance. If you want to distribute your radiosity package from hobbes as well, I would be honored. I agree that having a user base forces you to be a little more thorough... As for your request on how to write Radiance pictures, below is a skeletal program to write out a floating point picture. You will need to link to the Radiance library, or individually to color.o, resolu.o and header.o. All this will make more sense when you pick up your copy of 2.0 (available both from hobbes and from dasun2.epfl.ch <128.178.62.2> by anonymous ftp). You might want to wait a few days, as I plan to make a couple of minor changes before announcing it.
/*-----------------------------------------*/
#include <stdio.h>
#include "color.h"
#include "resolu.h"

extern char *malloc();

computepicture(xmax, ymax, fp)		/* compute and write picture */
int xmax, ymax;				/* image resolution */
FILE *fp;				/* output file */
{
	COLOR *scanout;
	int y;
					/* put format and resolution */
	fputformat(COLRFMT, fp);
	putc('\n', fp);
	fprtresolu(xmax, ymax, fp);
					/* allocate scanline */
	scanout = (COLOR *)malloc(xmax*sizeof(COLOR));
	if (scanout == NULL)
		quiterr("out of memory");
					/* produce image */
	for (y = ymax-1; y >= 0; y--) {
					/* compute this scanline */
		computscan(scanout, xmax, y);
					/* write it out */
		if (fwritescan(scanout, xmax, fp) < 0)
			quiterr("error writing Radiance picture");
	}
					/* clean up */
	free((char *)scanout);
	if (fclose(fp) < 0)
		quiterr("error closing Radiance picture");
}
/* Note: computscan() and quiterr() above are supplied by the caller. */
/*------------------------------------*/
Please do send me your paper, either whole or in pieces. Thanks! -Greg ==================================================================== COMPILE Compile problems relating to X11 and malloc.c From: dirty@engin.umich.edu (Cameron Javad Esfahani) Date: Fri, 22 Nov 91 16:27:39 EST To: GJWard@Csa2.lbl.gov Subject: How to get Radiance to work with X11 Hello, In the makeall script, it asks you whether you have support for X10. If you answer no, and insert x11 in the "special" commandline arguments, I get a large number of errors. If I answer yes, I get a few errors. So my question is, if we have X11R4, what should I do to get it to run under that? Do you know if the errors I am getting when I say yes when asked if I have support for X10 are just local errors? Thank you for any information you can give me. ---------------------------------------------------------------------------- Cameron Esfahani What can we do? dirty@engin.umich.edu We can go to the center of darkness. VizLab, USENET, Macintosh, Where's that? X-windows CAEN Support New Jersey. From greg Mon Nov 25 10:30:14 1991 Date: Mon, 25 Nov 91 10:30:08 +0100 From: greg (Greg Ward) To: dirty@engin.umich.edu Subject: Re: How to get Radiance to work with X11 Status: RO I'm sorry for the confusion. Just answer "no" to the X10 question; makeall makes the programs for X11 by default. I should have made this more clear. -Greg Date: Mon, 25 Nov 91 05:05:28 -0500 From: ugli To: "(Greg Ward)" Subject: Re: How to get Radiance to work with X11 Status: RO Actually, I tried that right after I mailed you. I feel a little sheepish now. I do wonder if there is better documentation other than the quick tutorial and the man pages. I haven't looked at the Macintosh document. Does this tell me what I need to know about Radiance? Thanks ------------------------------------------------------------------------------- Cameron Esfahani What can we do? dirty@engin.umich.edu We can go to the center of darkness. VizLab, USENET, Macintosh, Where's that? X-windows CAEN Support New Jersey. From: apian@ise.fhg.de (Peter Apian-Bennewitz) Subject: rpict fails on hp720 To: gjward@Csa2.lbl.gov (Greg Ward) Date: Sat, 7 Dec 91 14:35:42 MEZ Dear Greg, rpict fails with a bus error in src/common/readfargs.c line 80. Workaround: ln -s ../common/bmalloc.c malloc.c in the rt directory. Hm. I guess you introduced your own malloc routines to speed things up. Please excuse my ignorance, but does it pay? Peter Date: Sat, 7 Dec 91 08:57:22 PST From: greg (Gregory J. Ward) To: apian@ise.fhg.de Subject: Re: rpict fails on hp720 Hi Peter, I'm really surprised that my malloc is not working on your HP. Do you know what the alignment size is?
Do you know what the size of a double is? Can you run the following program on your machine for me? main() { printf("%d\n", sizeof(double)); } If the result is more than 8, then I might know what the problem is. Otherwise, I can only suppose that there is a bug unless you forgot to specify an HP when you ran makeall and the define -DALIGN=double did not get into the rmake command. Just to check, are you running make manually instead of rmake? This might explain why the correct definitions are not going in for your machine. -Greg Date: Sat, 7 Dec 91 09:18:37 PST From: greg (Gregory J. Ward) To: apian@ise.fhg.de Subject: Re: rpict fails on hp720 Hi Peter, In reply to your question about malloc, I wrote my own both for speed and storage efficiency reasons. As it turns out, there are some very good and some very bad implementations of malloc on the systems out there. I don't claim that mine is the best, or even that it's much better than average. It just performs well with my programs, which differentiate between memory that might be freed later and memory that will be kept for the life of the process (malloc vs. bmalloc). It also avoids some of the computational overhead in some of the more primitive malloc's and some of the unreasonable storage overhead in others. Last time I checked, BSD 4.3 was using a version of malloc that for requests just near half the system page size (4k requests on the Sun) ended up using twice the system page size. That's a memory utilization of 25%! On average, my version of malloc gets a memory utilization of 75%, which isn't wonderful, but it makes up for this in raw speed, processing memory requests and free's and realloc calls faster than any other malloc that I know of. And the alternative call, bmalloc, is not only fast, but it gets nearly 100% memory utilization, limited only by the alignment size of the machine. The best malloc I've seen is the one currently used by Sun, which is reasonably fast while providing very good memory utilization. Best of all, the Sun implementation coalesces memory as it is freed, something that is pretty difficult to do. I only recently added this capability to my malloc routines, and it doesn't work nearly as well as Sun's code. Unfortunately, the Sun routines are very complicated and not everybody uses their algorithm so I figure I'm better off being conservative on other people's machines. -Greg From: Peter Apian-Bennewitz Subject: Re: rpict fails on hp720 To: "Gregory J. Ward" Date: Sun, 8 Dec 91 15:08:02 MEZ Dear Greg, > what the alignment size is? Do you know what the size of a double is? Can > you run the following program on your machine for me? here's the output: (looks pretty normal to me) datatype bytes Size of Integer : 4 Size of short Integer : 2 Size of long Integer : 4 Size of unsigned Integer : 4 Size of long unsigned Integer: 4 Size of char : 1 Size of float : 4 Size of double : 8 I haven't checked the bus error in detail, however when using xdb its possible to check the contents of the variable without error. a read acces seems to work !??!?!?! Currently its all WIHIH to me (WhatInHellIsHappening). When running, the programs complains about "exp: range error". ?? Yet another question: The Hp720 b/w flavour comes with X11 visual type "GrayScale", same thing as "PseudoColor" (8 planes), but b/w. That's the only visual the server supports. I'd be more than happy to write an add-on, but before jumping into source code, how much work would that be (beside the X11 stuff). 
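[One simple way to get the behavior described above for bmalloc -- very fast allocation and nearly 100% utilization for memory that is never freed -- is to hand out space by just advancing a pointer through large chunks. A toy version of that idea, not Radiance's actual bmalloc.c:]

#include <stdlib.h>

#define CHUNKSIZ  8192			/* grab memory from the system in big pieces */
#define ALIGNSIZ  sizeof(double)	/* worst-case alignment assumed here */

/* Allocate n bytes that will never be freed individually: round up for
 * alignment and bump a pointer, so there is almost no bookkeeping overhead. */
char *bump_alloc(size_t n)
{
	static char *pool = NULL;
	static size_t left = 0;

	n = (n + ALIGNSIZ-1) & ~(ALIGNSIZ-1);
	if (n > left) {			/* current chunk too small: get another */
		size_t chunk = n > CHUNKSIZ ? n : CHUNKSIZ;
		if ((pool = malloc(chunk)) == NULL)
			return NULL;
		left = chunk;
	}
	pool += n;
	left -= n;
	return pool - n;		/* start of the freshly reserved block */
}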
Thanks a lot for the malloc explanation, it looked like an xtra to me, but your program does use incredible small amounts of memory when running, so it's probably a good thing. Peter -- Date: Sun, 8 Dec 91 18:30:45 PST From: greg (Gregory J. Ward) To: apian@ise.fhg.de Subject: Re: rpict fails on hp720 Hi Peter, The only thing I can think of is that you compiled with make instead of rmake or makeall and the wrong defines were used on malloc.c. Did you check this possibility? You can remove malloc.o and run rmake in the rt subdirectory and you should get a cc line with -DALIGN=double in it. (Don't forget to relink malloc.c instead of bmalloc.c.) Were you serious about Radiance not using much memory? Obviously, you haven't gotten to any of the larger models. What model were you rendering when you got the exp range error? This message often shows up when there is an underflow condition, something we would all like to ignore (eg. exp(-500) = 0), but some math libraries won't let us. Were you using a model with a call to exp() in a library file, or were you using gensky? It could have come from that. If it is coming from internal underflows of exp(), I would like to know about it so I could avoid this message in the future. With regards to the GrayScale visual, I should be checking for that in x11.c I suppose, but I just assumed that all grayscale servers would accept the PseudoColor visual type. Rview and ximage should work with greyscale displays using the -b options of each. Unless you add in a test to allow for it, though, both programs will insist on getting PseudoColor visuals. Personally, I think the way X11 handles the various display possibilities sucks. Testing for every possible configuration is a programming nightmare. Since I'm not exactly sure how a grayscale monitor is supposed to map its values, I don't know if you would have to add anything besides the one test to rt/x11.c and px/x11raster.c. -Greg ===================================================================== OPENWINDOWS Date: Fri, 11 Oct 91 16:03:28 Z From: Environmental Design Unit To: greg@lesosun1.epfl.ch Subject: RADIANCE Greg, I've got v2.0 up and running, no trouble at all thanks to your installation script. I haven't had the time to do anything interesting with it yet - other (thermal) work has taken priority - but I hope to start a programme of daylighting simulation work in the near future. In the meantime could you advise on the best way to get hold of the PLINK translator. My supervisor, Kevin Lomas, spoke to Raphael Compagnon about this at the PLEA conference. Perhaps we should also get hold of SUPERLITE and include it in any validation work we may do. Any ideas? The DF contour in RADIANCE v2.0 is a great help. However, for direct visual comparison of different cases, fixing of the contour levels, at say 1, 2, 4, 10, 20, 40%, would make evaluation much easier. I think i've figured out how the routine works, but I can't see how the levels could be fixed. On a more trivial note, users of OpenWindows may find some bindings helpful. So far, i've bound *.pic, *.rad & *.oct to their own icons. Application ximage is bound to *.pic and getinfo to *.oct (which of course appears in the console). Simple stuff, but it does speed things up being able to use the file manager to browse through pic files and 'get the info' on oct files. You may wish to pass this on to Sun - SunOS 4.1.1 users of RADIANCE. Hope you enjoyed your vist to the UK. 
=====================================================================

OPENWINDOWS

Date: Fri, 11 Oct 91 16:03:28 Z
From: Environmental Design Unit
To: greg@lesosun1.epfl.ch
Subject: RADIANCE

Greg,

I've got v2.0 up and running, no trouble at all thanks to your installation script. I haven't had the time to do anything interesting with it yet - other (thermal) work has taken priority - but I hope to start a programme of daylighting simulation work in the near future. In the meantime could you advise on the best way to get hold of the PLINK translator. My supervisor, Kevin Lomas, spoke to Raphael Compagnon about this at the PLEA conference. Perhaps we should also get hold of SUPERLITE and include it in any validation work we may do. Any ideas?

The DF contour in RADIANCE v2.0 is a great help. However, for direct visual comparison of different cases, fixing of the contour levels, at say 1, 2, 4, 10, 20, 40%, would make evaluation much easier. I think I've figured out how the routine works, but I can't see how the levels could be fixed.

On a more trivial note, users of OpenWindows may find some bindings helpful. So far, I've bound *.pic, *.rad & *.oct to their own icons. Application ximage is bound to *.pic and getinfo to *.oct (which of course appears in the console). Simple stuff, but it does speed things up being able to use the file manager to browse through pic files and 'get the info' on oct files. You may wish to pass this on to Sun - SunOS 4.1.1 users of RADIANCE.

Hope you enjoyed your visit to the UK.

-John

Date: Mon, 14 Oct 91 12:18:30 +0100
From: greg (Greg Ward)
To: edu@leicester-poly.ac.uk
Subject: Re: RADIANCE

Hi John,

I did have a pleasant visit to the UK. I'm sorry again that our schedules didn't work well together. I have forwarded your request for a copy of PLINK and Superlite to Raphael, and he should send you one shortly. I forget whether you need to go through official channels or if we can just send you a copy. It would be nice to include it in your validation studies.

I am glad you have had some luck with the dayfact script. I have been rather disappointed in the output I have gotten, which seems to be of low quality due to the abnormal use of pfilt to enlarge a tiny image. Anyway, I think you are right that control of the output is critical for comparisons, so here is a fixed-up version of the script that always sets the maximum value to 100%. You can alter this to whatever you like with the -s option (see the falsecolor manual page), and the -n option will determine how many contours you will get.

I know zip-diddley about OpenWindows, but I will put your suggestions in the next digest. Thanks!

-Greg

Date: Mon, 21 Oct 91 15:19:00 Z
From: Environmental Design Unit
To: greg@lesosun1.epfl.ch
Subject: RADIANCE and OpenWindows

Hi Greg,

Here are the trivial mods (accelerators?) to the OpenWindows filemanager. The aesthetics of the icons are questionable!

Yours,
-John

Using RADIANCE in OpenWindows                21 Oct 1991
-----------------------------

Modifications to filemanager bindings

Edit the file rad.filetype giving the applications "ximage" and "getinfo" the correct path name from root. Do the same for the icons pic, rad and oct. Copy rad.filetype to your home directory and put the icons in your icon directory. Go to your home directory and type the command:

	cat .filetype rad.filetype >> .filetype

Then remove rad.filetype. Quit the filemanager (if you have one running) and restart it. All *.pic, *.rad and *.oct files will be identified by their own icon. Double clicking (SELECTING) with the left mouse button on a *.pic icon in the filemanager will execute "ximage" and put that picture up on the screen. The same on an *.oct icon will execute "getinfo", writing the output to the console. To all *.rad files the print script "pr" (paging) has been added. You can change the colours by re-setting rgb values (5th argument on each line).

(You can assign whatever applications, print-scripts and icons to a file which, by default, appears as a "text" file in the filemanager. Giving executables new icons may cause the icon to almost fade out, depending on the colour set, when selected - instead of changing to black like it should. This appears to be a bug, originating deep in the system software.)

-John Mardaljevic
e-mail: edu@uk.ac.leicp

oct.icon 644 152 12 1115 5100547142 5533
/* Format_version=1, Width=32, Height=32, Depth=1, Valid_bits_per_item=16 */
0x3FFF,0xFFFC, 0x2000,0x0004, 0x2000,0x0004, 0x2000,0x0004,
0x2000,0x0204, 0x2000,0x0604, 0x21E1,0xCF04, 0x2332,0x6604,
0x2336,0x6604, 0x2336,0x0604, 0x2336,0x0684, 0x2337,0x2704,
0x21E3,0xC304, 0x2000,0x0004, 0x2000,0x0004, 0x2000,0x00C4,
0x2000,0x0F04, 0x2000,0x7004, 0x2000,0xA004, 0x2001,0x1004,
0x2002,0x0C04, 0x2004,0x0204, 0x2FF8,0x0004, 0x2004,0x00C4,
0x2004,0x0704, 0x2002,0x1C04, 0x2001,0x6204, 0x2000,0x8184,
0x2000,0x8044, 0x2000,0x4004, 0x2000,0x0004, 0x3FFF,0xFFFC
pic.icon 644 152 12 1115 5100547142 5521
/* Format_version=1, Width=32, Height=32, Depth=1, Valid_bits_per_item=16 */
0x3FFF,0xFFFC, 0x2000,0x0004, 0x2000,0x0004, 0x2FE3,0xC3D4,
0x2671,0x8634, 0x2631,0x8C14, 0x2631,0x8C14, 0x2631,0x8C04,
0x27E1,0x8C04, 0x2601,0x8C04, 0x2601,0x8C14, 0x2601,0x8634,
0x2F03,0xC3E4, 0x2000,0x0004, 0x2000,0x0004, 0x2000,0x0004,
0x2000,0x0004, 0x2000,0x0004, 0x2007,0xF004, 0x201F,0xFC04,
0x207F,0xFF04, 0x20FF,0xFF84, 0x20FF,0xFF84, 0x20FF,0xFF84,
0x207F,0xFF04, 0x201F,0xFC04, 0x2007,0xF004, 0x2000,0x0004,
0x2000,0x0004, 0x2000,0x0004, 0x2000,0x0004, 0x3FFF,0xFFFC

rad.filetype 644 152 12 406 5100554462 6372
*.pic,,/CORRECT_PATH_NAME/bin/ximage $FILE,/CORRECT_PATH_NAME/icons/pic.icon,255 215 0,,53,,
*.rad,,,/CORRECT_PATH_NAME/icons/rad.icon,219 112 147,pr $FILE | lpr,53,,
*.oct,,/CORRECT_PATH_NAME/bin/getinfo $FILE,/CORRECT_PATH_NAME/icons/oct.icon,155 200 90,,53,,

rad.icon 644 152 12 1115 5100547142 5514
/* Format_version=1, Width=32, Height=32, Depth=1, Valid_bits_per_item=16 */
0x3FFF,0xFFFC, 0x2000,0x0004, 0x2000,0x0004, 0x2000,0x0384,
0x2000,0x0184, 0x2000,0x0184, 0x2377,0x8F84, 0x21BC,0xD984,
0x2180,0xD984, 0x2187,0xD984, 0x218C,0xD984, 0x218C,0xD984,
0x23C7,0x6EC4, 0x2000,0x0004, 0x2FFF,0xFFF4, 0x2800,0x0014,
0x2800,0x0014, 0x2800,0x0014, 0x2800,0x0014, 0x2FFF,0xFFF4,
0x2000,0x0004, 0x2FFE,0x0E04, 0x2802,0x3184, 0x2802,0x2084,
0x2802,0x4044, 0x2802,0x4044, 0x2802,0x4044, 0x2802,0x2084,
0x2802,0x3184, 0x2FFE,0x0E04, 0x2000,0x0004, 0x3FFF,0xFFFC
~s Radiance Digest, v2n1

Dear Radiance Users,

Here once again we have a culling of e-mail exchanges to share. I hope by now that most of you have picked up version 2.0 of the program, which seems mostly stable except for one or two minor glitches. Please check also the previous digest, v2n0, if you have not seen it already. As always, back issues of the digest are available via anonymous ftp from hobbes.lbl.gov (128.3.12.38) in the pub/digest directory.

Here is a table of contents that you can use for finding the sections you are interested in. Use the search string /^==*$/ to skip to the next section.

	BUG		- A memory bug in rpict
	DAYLIGHT	- Daylight Scripts and TIM's
	ALIASING	- Anti-aliasing in Radiance (again?)
	LANGUAGE	- Radiance input language definitions
	NEXT		- Radiance compilation on the NeXT

By the way, if anyone has need of some first rate consulting or training on Radiance, I have someone I can recommend (besides myself!).

-Greg

===========================================================

BUG - A memory bug in rpict

Date: Wed, 11 Dec 91 00:59:10 MED
From: bojsen@moria (Per Bojsen)
To: greg@hobbes.lbl.gov
Subject: Bug in rpict (rpict.c)?

Hi Greg,

[I'm the guy working on the Amiga port of Radiance if you don't remember me.] A couple of weeks ago I picked up Radiance 2.0, and now have a working port. I think I may have discovered a bug in rpict, specifically the rpict.c module, though. Rpict crashes when run with anything other than `-sp 1'. Rpict overwrites (or rather underwrites ;-) memory just before a malloc()'d buffer. The number of bytes overwritten is proportional to the `-sp' setting. On the Amiga such overwriting is nasty because the free memory list will be mangled.

I snooped around in the source a bit to find something that depends on `-sp', i.e., the psample variable. I found something in the fillscanline() routine that may be the cause of the bug. Take a look at fillscanline():

fillscanline(scanline, zline, sd, xres, y, xstep)	/* fill scan at y */
register COLOR	*scanline;
register float	*zline;
register char	*sd;
int	xres, y, xstep;
{
	static int	nc = 0;		/* number of calls */
	int	bl = xstep, b = xstep;
	double	z;
	register int	i;

	z = pixvalue(scanline[0], 0, y);
	if (zline) zline[0] = z;
				/* zig-zag start for quincunx pattern */
	for (i = ++nc & 1 ?
			xstep : xstep/2; i < xres-1+xstep; i += xstep) {
			^^^^^^^^^
		if (i >= xres) {
			xstep += xres-1-i;
			i = xres-1;
		}
		z = pixvalue(scanline[i], i, y);
		if (zline) zline[i] = z;
		if (sd) b = sd[0] > sd[1] ? sd[0] : sd[1];
		b = fillsample(scanline+i-xstep, zline ? zline+i-xstep : NULL,
			       ^^^^^^^^^^^^^^^^
				i-xstep, y, xstep, 0, b/2);
		if (sd) *sd++ = nc & 1 ? bl : b;
		bl = b;
	}
	if (sd && nc & 1) *sd = bl;
}

Now, every other call of fillscanline() will have `i' start with xstep/2 in the for loop. In the call to fillsample() the first parameter is `scanline + i - xstep', i.e., `scanline - xstep/2' (xstep is even; if xstep is odd it will be `scanline - xstep/2 - 1', I think). If xstep is greater than 2 (it is 6 for -sp 4), the pointer represented by `scanline + i - xstep' will point to some memory (on reasonable systems, anyway) before the scanline array. If the fillsample() routine writes to colline[0], for example, we have found a bug.

Could you try to look into this and confirm whether it's indeed a bug? I'll try to change some things to see if my problem goes away. Thanks for making your work available!

--
"There had been something loose about the     //    Greetings from Per Bojsen
station dock all morning, skulking in        //
amongst the gantries and the lines and the  \\//    cbmehq!lenler!bojsen
canisters which were waiting to be moved ..." \/    pb@id.dth.dk

Date: Wed, 11 Dec 91 01:15:06 MED
From: bojsen@moria (Per Bojsen)
To: greg@hobbes.lbl.gov
Subject: Bug in rpict.c fillscanline()

I just tried a simple thing: I malloc()'d somewhat larger buffers for the scanlines and pointed the scanbar[] pointers into these larger buffers to allow for the overwrite of the memory before the buffer. This made the symptom of the bug go away (no crash). So I'm pretty certain that what I described in my previous mail must be a bug. The question is: how do I fix it?

--
"There had been something loose about the     //    Greetings from Per Bojsen
station dock all morning, skulking in        //
amongst the gantries and the lines and the  \\//    cbmehq!lenler!bojsen
canisters which were waiting to be moved ..." \/    pb@id.dth.dk

From greg Tue Dec 10 17:48:15 1991
Date: Tue, 10 Dec 91 17:48:12 PST
From: greg (Gregory J. Ward)
To: bojsen@moria
Subject: Re: Bug in rpict (rpict.c)?

Hi Per,

Of course I remember you. There aren't many people with your kind of nerve, going where no programmer has gone before and all that.

You have indeed found a bug. A bit of stupidity on my part after the last so-called "enhancement" to my sampling code. I guess it doesn't show up on most machines (including mine) because the memory doesn't get freed until after the program is done. Anyway, here is the routine returned to its original intent, and thanks for your thorough analysis of the problem!!

-Greg
-----------------

------- rpict.c -------
4c4
< static char SCCSid[] = "%Z%%M% %I% %G% LBL";
---
> static char SCCSid[] = "@(#)rpict.c 2.3 12/10/91 LBL";
280,281c280,285
< 		b = fillsample(scanline+i-xstep, zline ? zline+i-xstep : NULL,
< 				i-xstep, y, xstep, 0, b/2);
---
> 		if (i <= xstep)
> 			b = fillsample(scanline, zline, 0, y, i, 0, b/2);
> 		else
> 			b = fillsample(scanline+i-xstep,
> 					zline ? zline+i-xstep : NULL,
> 					i-xstep, y, xstep, 0, b/2);

From bojsen@dc.dth.dk Thu Dec 12 06:13:23 1991
Date: Thu, 12 Dec 91 15:12:01 +0100
From: bojsen@dc.dth.dk (Per Bojsen)
To: greg@hobbes.lbl.gov
Subject: bug and fix

Hi Greg.

I've incorporated the bug fix now and it works like a charm! Thanks for acting so promptly.
A question regarding your special malloc() implementation. Do you depend on the memory added by subsequent sbrk() calls to be contiguous with the already allocated memory? The problem is that the sbrk() that comes with the SAS/C compiler is *not* compatible with UNIX in that regard (and can never be, due to the way memory allocation works on the Amiga); every time sbrk() is called you get a new memory block that is not contiguous with the rest. Your malloc() does seem to work, though. I just want to be sure there's no hidden danger.

--------------------------------------------------------------------------------
Per Bojsen                 The VLSI Research Group
EMail: bojsen@dc.dth.dk    MoDAG
                           Technical University of Denmark
--------------------------------------------------------------------------------

From greg Thu Dec 12 09:46:09 1991
Date: Thu, 12 Dec 91 09:45:59 PST
From: greg (Gregory J. Ward)
To: bojsen@dc.dth.dk
Subject: Re: bug and fix

Hi Per,

My malloc does indeed work better if sbrk() returns contiguous blocks of memory, but it does not depend on it.

-Greg

==============================================================

DAYLIGHT - Daylight Scripts and TIM's

[TIM stands for Transparent Insulation Materials -- if you don't know what it is then you probably wouldn't care. -G]

Date: Wed, 11 Dec 91 13:59:32 PST
From: greg (Gregory J. Ward)
To: apian@ise.fhg.de
Subject: RE: Questions

Hi Peter,

> Modelling rooms with TIM windows looks very promising and takes most
> of my time beside end-of-year paperwork etc.

Raphael Compagnon recently started looking at these, and I tried to help him out with the BRDF description of a certain kind of TIM, the kind made up of many closely packed plastic cells aligned perpendicular to the window surface. Here is the .cal file I made:

--------------------------------
{
	Calculate BRTDF of Transparent Insulation Materials
	made up of many small tubes packed tightly together.

	29 Nov 1991	Greg Ward and Raphael Compagnon

	Apply with following BRTDfunc:

		mod BRTDfunc tim1
		10 0 0 0 stran stran stran
		   brtdf brtdf brtdf tim1.cal
		0
		7 0 0 0 R T 1 K

	where:
		R = diffuse reflectance when Ktan_t is zero
		T = total transmittance divided by (1-R)
		K = ratio of tube length to diameter
}
Ktan_t = A7 * Sqrt(1-Rdot*Rdot)/Rdot;

stran = if(1-Ktan_t, 2/PI*Acos(Ktan_t) - Ktan_t/PI*Sqrt(1-Ktan_t*Ktan_t), 0);

brtdf(lx,ly,lz) = (1-stran)/PI;
-------------------------------------

You might like to ask Raphael how it's going. His e-mail is compagnon@eldp.epfl.ch.

> If you like, a user-id at ISE would be no problem at all.
> e.g., I found access to man pages of other machines is fine sometimes.

Sure, when you get time could you do this for me? Also, give me some details and a model so I can reproduce the error you got from readfargs.

> I've got a question, I don't dare to ask, because it shines bright
> light on my ignorance:
> -ad N	is the number of random rays sent out into the hemisphere to
>	look for light coming from other surfaces
> -as N	if two of these rays differ, N other rays are sent in the
>	directions between them
> -ar and -aa specify what happens when one of these rays hits an area
>	on a surface where there are no ambient values (yet).
>	Either this initiates a new hemisphere sampling or the value
>	is interpolated by using other ambient values on that
>	surface. -aa specifies the threshold when a new hemisphere
>	sampling is started.
> As for the moment, am I totally lost or somehow on the right track?

It seems to me that you understand it pretty well. I would add the following:

-ad N	is the number of INITIAL rays sent out into the hemisphere to
	look for light coming from other surfaces

-as N	is the number of ADDITIONAL rays sent out to reduce variance in
	the initial hemisphere sample, based on the assumption that the
	initial sample captured all significant intensity gradients

-ar and -aa specify what happens when one of these rays hits an area on a
	surface where there are no ambient values (yet). Either this
	initiates a new hemisphere sampling or the value is interpolated
	using other ambient values on that surface. -aa specifies the
	threshold at which a new hemisphere sampling is started. -ar
	specifies a maximum resolution, past which the -aa value will
	start to be relaxed. The resolution is computed by the global
	cube size (as determined by oconv and displayed by getinfo -d on
	the octree) divided by the -ar parameter.

-Greg
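[To make those parameters concrete, here is a hypothetical rendering command of the sort discussed above; the file names and the numbers are purely illustrative, not recommended settings:

	rpict -vf room.vp -ad 512 -as 128 -aa .15 -ar 32 room.oct > room.pic

Here -ad gives the initial hemisphere rays, -as the additional rays used to reduce variance, -aa the threshold that decides between interpolating existing ambient values and starting a new hemisphere sampling, and -ar the resolution limit past which that threshold is relaxed, exactly as described in the reply above.]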
From: Environmental Design Unit
Date: Tue, 10 Dec 91 15:18:30 GMT
To: ray@hobbes.lbl.gov

Hi Greg,

I thought I'd get back to you about the daylight factor scripts. For comparison purposes, contour lines are preferable to bands. However, when I switch from one to the other, the lines appear to be mid-way between the bands. I would have expected the line to overlap the band (or be its leading edge) - also the legend arrangement is different. It does look like one scheme is giving the mid-way values of the other. It's a small point but I would like to clear it up before I start to compare results with other programs.

The modification to dayfact to allow user fixed contour levels works fine - but ... it's still not ideal. Is there a way around having fixed increment scaling, i.e. 1,3,5,7,9, and so on? Levels of 1,2,4,6,10,15,20 etc. would be better.

More generally, is v2.0 different in any way from the beta release I obtained from you? Also, could you briefly explain the changes to the source function for windows? Did you get the OpenWindows file-manager mods for RADIANCE? I hadn't actually e-mailed a tar file before (or since), so I just sent it off as normal mail, hoping that's how it is done.

Hope all is well,
-John

Date: Thu, 12 Dec 91 10:05:17 PST
From: greg (Gregory J. Ward)
To: edu@leicester-poly.ac.uk
Subject: Re: RADIANCE

Hi John,

Yes, the contour lines do appear midway between values rather than at those values like the bands. Currently, there is no way to specify exact contour levels at irregular intervals using this script. I would have to rewrite it significantly, which I may do when I find some time. In the meantime, I would like you at least to have the latest version of the dayfact script. I made some changes following release 2.0 to correct an error in the illuminance contour calculation and add a new ability to calculate daylight savings. Unfortunately, I do not remember exactly when I gave you the beta copy, so I don't know how much I changed since then.

I did get the OpenWindows modifications you sent me, and thank you. Did you not receive the latest Radiance Digest? In it I included your mods for other OpenWindows users.

-Greg

From: Environmental Design Unit
Date: Mon, 16 Dec 91 11:25:24 GMT
To: greg@hobbes.lbl.gov
Subject: Radiance

Greg,

Thanks for the reply. I sort of guessed that mods to dayfact would not be trivial. Yes I did get the digest with my OpenWindows stuff, but it did look garbled.

Changing the subject altogether, is RayShade a shading analysis program?
We have on occasion used a heliodon to give shading info for our thermal analysis work - despite the complexity of current 'state of the art' programs, the treatment given to sun-patching is still rather simplistic and best results are obtained if the user tweaks the input a bit. What has this to do with Radiance? Well, I must confess, I used your program in a rather trivial mode to look at the shading effectiveness of bridge structures in a proposed atrium design. What surprised me was how quickly I could generate an adequate description with a load of genbox and xform commands and a bit of vi'ing. A couple of simple shell-scripts to generate the oct and pic files and Bingo! I admit, it's a bit like using a CRAY to work out the grocery bill, but it's quick and simple. In fact, I'd be amazed to find a program which does it more efficiently - a PC based commercial package we have provides no contest. I know this is very much a side issue but I thought I'd let you know anyway.

Regards,
-John

P.S. As you've no doubt guessed, my daylighting project has been put back yet another month by other work commitments.

Date: Tue, 17 Dec 91 09:46:41 PST
From: greg (Gregory J. Ward)
To: edu@leicester-poly.ac.uk

Hi John,

Yes, I think the OpenWindows stuff you sent may have been garbled. I couldn't tell myself because I didn't know what it was supposed to look like! I suggest creating a compressed tar file and uploading it to the pub/libraries directory on hobbes.lbl.gov (128.3.12.38) by anonymous ftp. The libraries directory isn't exactly right, but I don't have one that is, so it will have to do.

I am glad that you had some success using Radiance for your shadowing calculation. I agree that most of the work is getting the geometry right. I have used vi, xform and genbox (and gensurf, genprism, etc.) to create my models for many years now. I still don't use a CAD system, for better or worse.

RayShade was not made specifically for shadow calculation, although it probably does it just as well as Radiance. I don't think you get a solar position program like gensky or anything like genbox, but RayShade does provide a few more surface primitive types. I should stop talking about it, though, since I really don't know that much about the software.

-Greg

=======================================================

ALIASING - Anti-aliasing in Radiance (again?)

From: Paul Douglas
Date: Thu, 9 Jan 92 13:30:47 EST
To: greg@hobbes.lbl.gov
Subject: Aliasing

Hi Greg,

You probably don't remember, but we communicated early last year. I was hoping to use Radiance to produce a video sequence and you kindly sent me ready-made letters and numbers. Well, the project was shelved, but now it looks like it's on again, so I have a question. When I use rpict to generate the Radiance image and then convert to a Sun raster image, diagonal edges are jagged. Pfilt seems not able to reduce this type of aliasing, but I'm guessing that it can be reduced by changing the pixel size, although I'm not sure how. Can you offer any quick suggestions that will reduce the roughness of the object edges?

Thanks much,
Paul

Date: Thu, 9 Jan 92 11:02:19 PST
From: greg (Gregory J. Ward)
To: douglas@ctr.columbia.edu
Subject: Re: Aliasing

Hi Paul,

The anti-aliasing approach taken in Radiance is a little different from other raytracers, inasmuch as you must combine rpict with pfilt in order to arrive at the desired result.
This is done by specifying an initial picture resolution for rpict a few times greater than what you want in the final image, then using pfilt to reduce it down. This implements anti-aliasing by oversampling, which is the most effective approach for ray tracing. For example, you might use the following commands to get a 512x512 anti-aliased image:

	% rpict -x 1024 -y 1024 -vf scene.vp scene.oct > scene.u.pic
	% pfilt -x /2 -y /2 scene.u.pic > scene.f.pic

This would produce a reasonably anti-aliased image in a minimum of time. To get a really fine image, you can increase your sampling rate and use the Gaussian filter option of pfilt, like so:

	% rpict -x 1536 -y 1536 -vf scene.vp scene.oct > scene.u.pic
	% pfilt -x /3 -y /3 -r .67 scene.u.pic > scene.f.pic

I hope this helps.

-Greg

======================================================

LANGUAGE - Radiance input language definitions

Date: Mon, 9 Dec 91 15:13:28 -0800
From: mcancill@denali.csc.calpoly.edu (Mike Cancilla)
To: GJWard@Csa2.lbl.gov
Subject: Radiance Language BNF...

Hi,

I've been using Radiance for about 1.5 months now, and I think it's really nice. I've still got to ftp 2.0 though. I'm a graduate student here at Cal Poly, and I'm currently in a graduate languages class. My final paper consists of a comparison of 3 ray tracing languages, namely the Radiance language, the language for DKB trace, and an in-house raytracing language. I was wondering if I could get the BNF specs for the Radiance language? Any other info you might deem helpful would also be welcome. I'll send you a plain text copy of the report if you want it.

The report will probably use three different languages to do the same scene, and make a comparison based on ease and features. A technical description of each language, such as syntax and semantics, will also be given. I will also focus on any looping structures, math functions, texture availability, and animation features the language may have. Any help would be greatly appreciated.

Thanks,
Mike

Date: Mon, 9 Dec 91 20:08:26 PST
From: greg (Gregory J. Ward)
To: mcancill@denali.csc.calpoly.edu
Subject: Re: Radiance Language BNF...

Hi Mike,

Excuse my ignorance, but what's a BNF? Personally, I would hesitate even to call the input format of Radiance a language! The only reference I can offer is in ray/doc/ray.1 of the standard distribution. I can send you a PostScript version if you haven't got it already. Version 2.0 does contain a few new BRDF material types, but other than that, the input language looks pretty much the same as 1.4.

I would say that most of the sophistication of the Radiance scene description is external, contained in the various object generators and auxiliary files available. The function files in particular use a Turing-equivalent expression language that provides recursive functions as its prime mode of programming as well as access to an extensive math library.

If you give me some more details of what you want, or suggestions on how to go about describing a particular scene, I'd be happy to help.

-Greg

Date: Thu, 12 Dec 91 01:55:21 -0800
From: mcancill@denali.csc.calpoly.edu (Mike Cancilla)
To: greg@hobbes.lbl.gov
Subject: Re: Radiance Language BNF...

Hi Greg,

I mailed you a few days ago regarding a BNF for the Radiance scene description language. BNF stands for Backus-Naur (sp) Form. It's a way of describing the syntax of programming languages.
The YACC utility, or BISON if you're of the GNU flavor, takes a BNF description of a language and analyzes the syntax of an input file written in the target language. Here's a BNF description for a raytracing mini-language I wrote for a class project; it's taken from an actual YACC input file:

%%
program:	open_stmt decls stmt close_stmt
	;
decls:		/* nothing */
	|	decls YOBJ_TYPE YOBJ_VAR ';'
	|	decls YFLOAT YNUM_VAR ';'
	;
open_stmt:	YOPEN_SCENE ';'
	;
close_stmt:	YCLOSE_SCENE ';'
	;
stmt:		/* nothing */
	|	stmt render_stmt
	|	stmt do_loop
	|	stmt assign
	|	stmt foreach
	|	stmt move
	|	stmt specularity
	|	stmt reflectivity
	|	stmt ambient
	|	stmt color
	;
render_stmt:	YRENDER_SCENE ';'
	;
do_loop:	YDO expr YTIMES '{' stmt '}'
	;
assign:		YNUM_VAR '=' expr ';'
	;
foreach:	YFOREACH YOBJ_TYPE '{' stmt '}'
	;
move:		YMOVE YOBJ_VAR YTO expr ',' expr ',' expr ';'
	|	YMOVE YIT YTO expr ',' expr ',' expr ';'
	;
specularity:	YSPEC YOF YOBJ_VAR YIS expr ';'
	|	YSPEC YOF YIT YIS expr ';'
	;
reflectivity:	YREFL YOF YOBJ_VAR YIS expr ';'
	|	YREFL YOF YIT YIS expr ';'
	;
ambient:	YAMBI YOF YSCENE YIS expr ';'
	;
color:		YCOLOR YOF YOBJ_VAR YIS expr ',' expr ',' expr ';'
	|	YCOLOR YOF YIT YIS expr ',' expr ',' expr ';'
	;
expr:		assign
	|	expr '+' expr
	|	expr '-' expr
	|	expr '*' expr
	|	expr '/' expr
	|	'(' expr ')'
	|	YNUM_VAR
	|	YNUMBER
	;
%%

So there's a BNF. I can probably try and derive some form of one by looking at some example Radiance scene descriptions. I've printed out the PostScript version of the documentation draft for 2.0.

Here's a list of topics I'm going to cover in my paper; the three languages I'm going to compare are the language for Rayshade, Radiance, and the Cal Poly raytracing project language, Goober.

Topics:

	Specification of scene parameters
		- Right or left hand coordinate system
		- Eyepoint
		- View direction, etc.
	Supported primitives
	Lighting Models
		- What's available, and how to specify a certain model via the lang.
	Textures
		- How are they handled by the lang.
		- Can users specify their own?
	Bit Mapping
		- How do you apply a bitmap, such as the infamous Mandrill, to an object.
	Constructive Solid Geometry (CSG)
		- Is it supported, how does one build objects
	Looping Constructs and/or Recursive Calls
		- Are they available, how to use
	Functions or Procedures
		- Supported by lang.?
	User Extensibility
		- How does the user specify his/her own objects, textures, etc.

I decided to throw out the "Let's see how three different languages specify the same scene" idea. They all pretty much looked the same! Plus, with this type of topic scheme, I get to turn in a heavier paper, :-). ANY input on how the Radiance language fits into these topics would be appreciated. I'm sure I can look most of it up in the documentation.

Thanks a bunch,
Mike

Date: Thu, 12 Dec 91 12:50:42 PST
From: greg (Gregory J. Ward)
To: mcancill@denali.csc.calpoly.edu
Subject: Re: Radiance Language BNF...

Hi Mike,

Thanks for explaining BNF. I figured it was something like that. Since I did not use yacc or similar for the parser, I will try to come up with a BNF independently.

The first thing to understand is that altogether there are at least 4 "languages" involved with Radiance scene descriptions: the basic scene input language, the function file language, the data file language and the font language. All except the function file language are exceedingly simple.

Scene Input
===========

statement:	primitive | alias | command | comment
primitive:	STRING type STRING
		INTEGER string_args
		INTEGER integer_args
		INTEGER real_args
alias:		STRING alias STRING STRING
command:	'!'
		STRING string_args
comment:	'#' string_args
string_args:	/* nothing */ | STRING string_args
integer_args:	/* nothing */ | INTEGER integer_args
real_args:	/* nothing */ | REAL real_args
type:		"polygon" | "cone" | "sphere" | "texfunc" | "ring" |
		"cylinder" | "instance" | "cup" | "bubble" | "tube" |
		"plastic" | "metal" | "glass" | "trans" | "dielectric" |
		"interface" | "plasfunc" | "metfunc" | "brightfunc" |
		"brightdata" | "brighttext" | "colorpict" | "glow" |
		"source" | "light" | "illum" | "spotlight" | "mirror" |
		"transfunc" | "BRTDfunc" | "plasdata" | "metdata" |
		"transdata" | "colorfunc" | "antimatter" | "colordata" |
		"colortext" | "texdata" | "mixfunc" | "mixdata" |
		"mixtext" | "prism1" | "prism2"

Function File
=============

decl:		';' | function_decl ';' | variable_decl ';'
function_decl:	ID '(' id_list ')' assign_op e1
variable_decl:	ID assign_op e1
id_list:	ID | ID ',' id_list
assign_op:	'=' | ':'
e1:		e1 '+' e2 | e1 '-' e2 | e2
e2:		e2 '*' e3 | e2 '/' e3 | e3
e3:		e4 '^' e3 | e4
e4:		'+' e5 | '-' e5 | e5
e5:		'(' e1 ')' | ID | ID '(' id_list ')' | REAL | '$' INTEGER

Comments may appear between any two tokens set off by curly braces {}, and may be nested to any level.

Data File
=========

data:		dimensions value_list
dimensions:	INTEGER dim_list
dim_list:	dim | dim dim_list
dim:		REAL REAL INTEGER | '0' '0' INTEGER indep_list
indep_list:	REAL | REAL indep_list
value_list:	/* nothing */ | REAL value_list

Font File
=========

glyph_list:	/* nothing */ | glyph glyph_list
glyph:		INTEGER INTEGER coord_list
coord_list:	/* nothing */ | INTEGER INTEGER coord_list

--------------------------------------------------------------

With regards to your topics, I have the following comments.

Specification of scene parameters:
	- Radiance uses a right-hand coordinate system
	- The eyepoint and view direction are given as options to the renderers, and can be stored in a separate file

Supported primitives:
	- N-sided polygons
	- spheres
	- cones, cylinders, rings
	- hierarchical instancing for very complex geometries

Lighting models:
	- Completely general
	- Converter provided for IES luminaire specification

Textures:
	- I break "textures" into two kinds, patterns and textures
	- Patterns are variation in color, and can be specified as pictures, data or functions in any combination
	- Textures are perturbations in surface normal, and can be specified in the same ways as patterns
	- A light source distribution is a pattern

Bit Mapping:
	- Usually given as a picture-type pattern (i.e. the "colorpict" type)
	- True bit-maps (i.e. 1-bit depth images) may also be produced using a special bit font

CSG:
	- Radiance has an "antimatter" type which supports some rudimentary CSG subtraction, but otherwise we are strictly B-rep

Looping constructs or Recursion:
	- The function file language supports recursion
	- The "xform" program provides iteration for repeated objects

Functions or Procedures:
	- The function file language supports functions without side effects

User extensibility:
	- The user may create function files, data files and font files, or provide his/her own images for patterns
	- General bidirectional reflection distribution functions may also be specified in the same way as patterns and textures

One additional topic I would like to see evaluated is the degree to which a language encourages the user to produce physically valid models. I realize that this is not the goal of many ray tracers, but I think it should be, and it is certainly a foremost consideration in Radiance. If a simulation produces bogus results, what value is it really? Along these lines, you made no mention of the reflectance model chosen or its specification. I think this is at least as important as the geometry.

Good luck with your paper. I'd love to see it when it's finished!

-Greg
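[To make the grammars above a little more concrete, two tiny made-up fragments follow. First, a scene input fragment -- a modifier, a type keyword, an identifier, and then the counted string, integer and real arguments, as in the "primitive" production (the material and sphere values are only illustrative):

	void plastic red_plastic
	0
	0
	5 .7 .05 .05 .05 .02

	red_plastic sphere ball
	0
	0
	4 0 0 1 .5

Second, a one-line function file using the recursion Greg mentions; if() is the library conditional, and -- as in the tim1.cal file earlier in this digest -- call arguments may be full expressions (the e5 production above simplifies them to identifiers):

	{ recursive factorial }
	fact(n) : if(n-.5, n*fact(n-1), 1);
]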
========================================================

NEXT - Radiance compilation on the NeXT

From: john%3jane.UUCP@mathcs.emory.edu (John B. Lockhart)
Subject: Help?
To: GJWard@Csa2.lbl.gov
Date: Sun, 15 Dec 91 16:27:23 EST

Greg:

I'm trying to compile Radiance version 2.0 on my NeXTstation (but don't let that scare you). I got the older version to compile, but only by nuking all X references and changing malloc() references to something like foo_malloc() due to conflicts with the standard library. Note that I really appreciate your "noX11.help" file in 2.0, but here I am again for 2.0, changing malloc() references. Am I doing something stupid? Every time I try to compile Radiance, I get myriads of:

	/bin/ld: multiple definitions of symbol _strcmp
	/bin/ld: multiple definitions of symbol _realloc
	/bin/ld: multiple definitions of symbol _malloc
	/bin/ld: multiple definitions of symbol _free

messages and cannot continue without prefixing foo_ to everything. Help.

--
| John B. Lockhart                 |.:Did you know that all the water:. .:. .:|
| Junior/EE, Georgia Tech          |:.between California and Japan would:. .:.|
| john%3jane.uucp@mathcs.emory.edu |. fill the Pacific Ocean? .:. .:. .:. .:. |
| (Above address NeXTmailable.)    | .:. .:. .:--John's Stupid Quotes #64721 .|

Date: Mon, 16 Dec 91 08:37:48 PST
From: greg (Gregory J. Ward)
To: john%3jane.UUCP@mathcs.emory.edu
Subject: Re: Help?

Hi John,

Sorry you are having trouble with my C declarations. This seems to be one of the least well standardized parts of C. Different C compilers seem to insist on totally different declarations. You may find in your cc man page some options to affect the type of declarations the compiler will accept. The default mode seems to be incompatible with old C standards (the ones I use for coding). See if there is a K&R option to the compiler, or something to turn ANSI-type declarations off. It should not be necessary to change the names of the functions! I cannot code to newer standards, because people with the older standards wouldn't be able to cope, whereas the reverse is usually possible.

I am assuming that the errors you get are fatal ones. If they are only warnings, you can feel safe to disregard them and the programs should compile anyway.

Hope this helps.
-Greg

From: john%3jane.UUCP@mathcs.emory.edu (John B. Lockhart)
Subject: Re: Help?
To: greg@hobbes.lbl.gov (Gregory J. Ward)
Date: Mon, 16 Dec 91 16:30:49 EST

I've tried setting my C compiler to handle non-ANSI, etc, etc, which only causes more problems.

:I cannot code to newer standards, because people with the older standards
:wouldn't be able to cope, whereas the reverse is usually possible.

I can't see how changing references "malloc" to "my_malloc" constitutes coding to a new standard; it wouldn't hurt anyone else and it would sure help people with the newer compilers.

:I am assuming that the errors you get are fatal ones. If they are only
:warnings, you can feel safe to disregard them and the programs should
:compile anyway.

They're fatal, of course. I've been relegated to renaming all of your functions to something non-conflicting and then just compiling Radiance using the standard library equivalents. (The brute force method of porting.
:) I've been wondering for some time what rview does; I've glanced over the man pages but have been unable to run it (of course) due to the fact that I don't have any of the drivers. I know you don't want to write a NeXT driver for it (seeing as I wouldn't either if I didn't have a NeXT!), but perhaps you could give me a shove in the right direction to write my own, which you might perhaps incorporate into a future Radiance release... -- | John B. Lockhart |..As bad as it is, the U.S. Constitution..| | Junior/EE, Georgia Tech |..is a lot better than what we have now...| | john%3jane.uucp@mathcs.emory.edu |..........................................| | (Above address NeXTmailable.) |...............................--Unknown..| Date: Mon, 16 Dec 91 14:01:12 PST From: greg (Gregory J. Ward) To: john%3jane.UUCP@mathcs.emory.edu Subject: Re: Help? Hi John, I still say that you should not rename the functions. The declarations I have are either for functions in the system library (such as calloc) or for my own replacement for these functions (such as malloc). Renaming functions to foo_malloc and so forth is counter-productive. If the advance declarations I give cause conflict, then remove them rather than renaming them. I hope I am not misunderstanding your problem. Which declarations are in conflict? I do not believe that there is any real conflict involved, only changes in the syntax of advance declarations. Regarding device drivers for rview, I would be delighted if you would write one. You should first read the file driver.h in ray/src/rt, then look at the routines in x11.c for ideas. You may find that the existing driver for NeWS is the closest to the Display PostScript used by the NeXT. -Greg P.S. If you are on the network and willing to make me an account, I will be happy to look at the compile problems myself and see if I can figure a way around them. From: john%3jane.UUCP@mathcs.emory.edu (John B. Lockhart) Subject: Re: Help? To: greg@hobbes.lbl.gov (Gregory J. Ward) Date: Mon, 16 Dec 91 18:38:55 EST Greg: :I still say that you should not rename the functions. The declarations :I have are either for functions in the system library (such as calloc) :or for my own replacement for these functions (such as malloc). Renaming :functions to foo_malloc and so forth is counter-productive. If the :advance declarations I give cause conflict, then remove them rather :than renaming them. : :I hope I am not misunderstanding your problem. Which declarations are :in conflict? I do not believe that there is any real conflict involved, :only changes in the syntax of advance declarations. Ok, lemme explain again: Any function that you declare to replace a standard library function with the same name is causing my *linker* to have screaming fits about "duplicate symbols." It is *not* over- riding the standard library functions with yours, like it should be, and I have no idea how to get it to do so (I've tried just about every command-line option on the man page). In order to alleviate this, I have renamed *your* function-declarations that are in conflict; thus Radiance compiles using the standard library versions rather than its own. This essentially just creates dead code; I could just comment them out and have it work. I was suggesting that you rename your functions to something else to prevent conflicts with the standard library, but as most compiler/linkers override library symbols with your own, it isn't necessary for many machines. 
The only way for me to get Radiance to use it's own memory allocation routines is to rename them to something that doesn't conflict with the standard library, throughout *all* of Radiance. In other words, it is impossible for me on my NeXT to compile: char *malloc() { return NULL; } main() { malloc(); } because *my* malloc() conflicts with the standard library malloc(). :Regarding device drivers for rview, I would be delighted if you would :write one. You should first read the file driver.h in ray/src/rt, then :look at the routines in x11.c for ideas. You may find that the existing :driver for NeWS is the closest to the Display PostScript used by the NeXT. I'll give it a shot. First I need to get Radiance working! :) :P.S. If you are on the network and willing to make me an account, I will :be happy to look at the compile problems myself and see if I can figure :a way around them. Unfortunately my only InterNet account is a Sequent student account. My 'station is at home connected via a UUCP link. So it goes. I've been standing on soap boxes shouting for student SLIP or PPP (yeah, right), to no avail. If I get some sort of campus computer employment, then one day Georgia Tech might suddenly have a clandestine PPP link (hehehe)... -- | John B. Lockhart |..What kind of dream was this,............| | Junior/EE, Georgia Tech |..so easy to destroy?.....................| | john%3jane.uucp@mathcs.emory.edu |..........................................| | (Above address NeXTmailable.) |.........................--Pet Shop Boys..| Date: Tue, 17 Dec 91 09:36:23 PST From: greg (Gregory J. Ward) To: john%3jane.UUCP@mathcs.emory.edu Subject: Re: Help? Hi John, I guess I had no idea of the magnitude of the problem. I've NEVER heard of a linker that refused to override library definitions. That goes against a very fundamental law of libraries. The reason for having library replacements is efficiency. Sometimes the library functions do not perform well, sometimes they are unreliable on different architectures (or missing entirely), and sometimes there is something particular about the program that makes the library implementations less efficient than they could be. A simple example of where a library function is overridden is in my definition of strcmp() (in common/savestr.c). Most library's strcmp's do not compare the pointer values they are handed for equality because this case never happens. But when using savestr(), many strings will end up pointing to the same address so comparing pointers avoids having to test for equal bytes through the length of the string. The new implementation of savestr does the same job as the library version, but in the special case of equal pointers it does it much faster. Similarly, I wrote my own malloc() routines to work in consort with a variation called bmalloc() that allocates untagged blocks of memory. Most libraries do a decent job nowadays with malloc (although there are some notable exceptions), but if I use the library malloc, then bmalloc does not work as efficiently. (And bmalloc is what I do most of my big allocations with.) Sure, removing my implementations of the library functions will work, but with some loss in efficiency. I do not want to rename all references myself, because some of these routines I use other places without my particular implementations and I don't want to have to carry all my routines with them. 
It's very inconvenient to call mystrcmp() everywhere I would normally use strcmp() when I may or may not be linking to my own library containing mystrcmp. I also cannot call both my library function and the standard one because in some cases they are incompatible. I know there are implementations of malloc that fail catastrophically if you make a call to sbrk() (as mymalloc would) inbetween calls to it. In conclusion, I think you are taking the only possible course of action by renaming or removing my implementations of the library routines. Do not rename the calls, though. Just discard my replacements. I recommend contacting the folks at NeXT to find out what is going on with their linker. Those guys are really out on a limb or out to lunch or something. -Greg From: john%3jane.UUCP@mathcs.emory.edu (John B. Lockhart) Subject: Re: Help? To: greg@hobbes.lbl.gov (Gregory J. Ward) Date: Tue, 17 Dec 91 14:51:38 EST Greg: :I guess I had no idea of the magnitude of the problem. I've NEVER heard :of a linker that refused to override library definitions. That goes :against a very fundamental law of libraries. I agree. I could be missing something; in any case I'm certainly very frustrated. I think I'll post that little program I gave you onto comp.sys.next.programmer and hope someone there says "-$, stupid!" or something... I completely understand your replacement of the standard library functions; I was never in disagreement with that. Also, using the same names allows you to use either your version or the standard version by merely including a library or not as an arg to the compiler. My only problem was that this wreaks havoc with my system, which is not as it should be. :In conclusion, I think you are taking the only possible course of action :by renaming or removing my implementations of the library routines. Do :not rename the calls, though. Just discard my replacements. That's what I've done. Seems to be working well enough now. :I recommend contacting the folks at NeXT to find out what is going on :with their linker. Those guys are really out on a limb or out to lunch :or something. To tell you the truth, I don't think it's NeXT--my suspicion is that it's GNU. I had a friend compile my little program on a SPARC with and without the GNU CC compiler and he got the same message when he used GNU; it worked fine using Sun's CC. I will take your suggestion though and try to find out what (if anything) I'm doing wrong... Thanks. --John PS: I'll proabaly be in touch sooner or later about NeXT driver woes or (perhaps) the solution to the multiple-symbol-definitions problem so that you may add a NeXT line to your makeall script. -- | John B. Lockhart |..Bow down before the one you serve.......| | Junior/EE, Georgia Tech |..You're going to get what you deserve....| | john%3jane.uucp@mathcs.emory.edu |..........................................| | (Above address NeXTmailable.) |.......................--Nine Inch Nails..| From: john%3jane.UUCP@mathcs.emory.edu (John B. Lockhart) Subject: Multiple Symbol Madness To: GJWard@Csa2.lbl.gov Date: Thu, 19 Dec 91 1:15:45 EST Greg: You're going to *love* this. I posted the message about the linker having real fits about multiple symbols onto comp.sys.next.programmer, and pretty much got flamed for not noticing this until now. Seems I've revived an old thread. (Woe be unto me, for I have sinned!) Someone did, however, mail me the following solution which, though kludgy, hackish, and ugly besides, works: cc -O -Dmalloc=my_malloc -o prog prog.c Yes, that's right. 
Just tell it to redefine matters and let the preprocessor do the search/replace work for you. This can be added as args in your makeall script at least until something better comes along. "-Dmalloc=my_malloc -Dstrcmp=my_strcmp ..." I don't like it any more than you do. --John -- | John B. Lockhart |..I don't think we're in..................| | Junior/EE, Georgia Tech |..Kansas anymore, Toto!...................| | john%3jane.uucp@mathcs.emory.edu |..........................................| | (Above address NeXTmailable.) |...............................--Dorothy..| Date: Thu, 19 Dec 91 08:51:54 PST From: greg (Gregory J. Ward) To: john%3jane.UUCP@mathcs.emory.edu Subject: Re: Multiple Symbol Madness Hi John, It's not a pretty solution, but at least it's a solution! I suppose that one of us should have thought of that... Anyway, I am prepared to add it to the makeall script. Rather than trying to remember what standard library functions I have redefined, do you have a list that you could share with me? The only ones I can think of offhand are malloc and strcmp. Thanks a million! -Greg From: john%3jane.UUCP@mathcs.emory.edu (John B. Lockhart) Subject: Re: Multiple Symbol Madness To: greg@hobbes.lbl.gov (Gregory J. Ward) Date: Thu, 19 Dec 91 15:21:30 EST Greg: :It's not a pretty solution, but at least it's a solution! I suppose that :one of us should have thought of that... No it isn't. Yes we should've. Perhaps we were thinking along the lines of horror at the fact that the linker wasn't doing things right and "there must be a command line switch" rather than just looking for kludgy solutions. :Anyway, I am prepared to add it to the makeall script. Rather than trying :to remember what standard library functions I have redefined, do you have :a list that you could share with me? The only ones I can think of offhand :are malloc and strcmp. Ok, the ones I've got a "Z" in front of are malloc(), realloc(), free(), and strcmp(). To assist you with adding a NeXT option: It is of course BSD-derived, not RISC, and has never heard of X11. I think your noX11help isn't entirely complete--some other Makefiles needed patching to remove X stuff, and I can't quite remember which ones. I had only two more fatal errors in Radiance-in-general that I can think of: You prototype atof() somewhere and I think that's a macro; this caused the compiler to have screaming fits--all that needs be done is remove the prototype. Also, in one file you define a macro: #define CTRL(c) ('c'-'@') this *always* compiles to ('c'-'@') on ANSI preprocessors, regardless of the argument. Thus in your switch statement later in the same file, the compiler barfs at duplicated case values. I merely replaced it with things like ('R'-'@'), etc, and everything was o.k. Finally, I have not been able to get the AutoCad-->Radiance converter and the TIFF library to compile. The AutoCad-->Radiance bit has to do with malloc() again for some reason, and the TIFF library is just a pain. This is somewhat disturbing as TIFF is the rasterfile of choice on NeXT; there are library calls to write pixmaps as TIFFs builtin to NeXT. But a machine-independent lib won't compile. Arrrrg. But with the patches I've listed I've been able to trace things in rpict. Rview of course is useless. I realize you can't make all of these patches and still have Radiance compile well on other machines--perhaps a machine-dependent note file? Who knows. -- | John B. 
Lockhart |..I want to hear you scream...............| | Junior/EE, Georgia Tech |..--Play some rap music...................| | john%3jane.uucp@mathcs.emory.edu |..........................................| | (Above address NeXTmailable.) |....................--The Last Boy Scout..| Date: Thu, 19 Dec 91 14:16:10 PST From: greg (Gregory J. Ward) To: john%3jane.UUCP@mathcs.emory.edu Subject: Re: Multiple Symbol Madness Thanks, John, for all your help. I will incorporate as many changes as I can figure out how to do in a machine-independent way. Thanks especially for spotting the macro failures -- I knew nothing about those before! I am sorry you weren't able to figure out the TIFF library or dxfcvt. Unfortunately, both were written by others and so I have limited ability to fix them. -Greg Date: Thu, 19 Dec 91 14:21:35 PST From: greg (Gregory J. Ward) To: john%3jane.UUCP@mathcs.emory.edu Subject: another thing... Did you test this -Dmalloc=Zmalloc thing already? As I said before, some malloc's are incompatible with outside calls to sbrk. If the NeXT has such a malloc, then this redefinition will cause troubles for sure. From: john%3jane.UUCP@mathcs.emory.edu (John B. Lockhart) Subject: Oops! To: greg@hobbes.lbl.gov (Gregory J. Ward) Date: Thu, 19 Dec 91 19:19:48 EST :Thanks, John, for all your help. I will incorporate as many changes as I :can figure out how to do in a machine-independent way. Thanks especially :for spotting the macro failures -- I knew nothing about those before! Don't mention it. Anyone putting out free software of Radiance's quality deserves all the help he can get. Note that you can add some special-cased code if necessary with #ifdef NeXT ... #endif 'cause NeXT is defined in the compiler. That may not be necessary, however. :I am sorry you weren't able to figure out the TIFF library or dxfcvt. :Unfortunately, both were written by others and so I have limited ability :to fix them. No big deal. I don't use AutoCAD, and I can convert from Sun Raster to TIFF anyway. :Did you test this -Dmalloc=Zmalloc thing already? As I said before, some :malloc's are incompatible with outside calls to sbrk. If the NeXT has :such a malloc, then this redefinition will cause troubles for sure. Oops. What was I thinkin'!?? I tested it on that sample program but didn't test it on Radiance due to some subconcious fear of compiling for an hour. I looked at your malloc() stuffs, though, and noted your use of sbrk() like you said (I've never used it before but assume it to be crucial in bypassing malloc())... then checked the NeXT man page on a hunch: % man sbrk BRK(2) UNIX Programmer's Manual BRK(2) NAME brk, sbrk - change data segment size The UNIX system calls brk and sbrk are not supported on the NeXT computer. Ain't that just dandy? At this point I assumed it wasn't worth the bother of recompiling Radiance. Since it's working fine with the standard library malloc(), and that's what I used in v1.4, why don't we call it even and just special-case your malloc() defs out of the Makefile or something, then just add the -Dstrcmp=my_strcmp for strsave? (Such a mess. If you think this is bad, though, try porting an IBM Pascal program to a Mac C program that uses the GUI.) -- | John B. Lockhart |: Then again: : : : : : : : : : : : : : : | | Junior/EE, Georgia Tech | :We could all be WRONG: : : : : : : : : :| | john%3jane.uucp@mathcs.emory.edu |: (Wouldn't be the first time.) : : : : : | | (Above address NeXTmailable.) 
| : : : : : : : : : : : --Laurie Anderson :| Date: Thu, 19 Dec 91 16:49:54 PST From: greg (Gregory J. Ward) To: john%3jane.UUCP@mathcs.emory.edu Subject: Re: Oops! OK, thanks for checking. I guess NeXT users will just have to make some additional Makefile changes -- namely, deleting malloc.o from the Makefile's in src/ot and src/rt. Date: Thu, 19 Dec 91 17:02:36 PST From: greg (Gregory J. Ward) To: john%3jane.UUCP@mathcs.emory.edu Subject: Re: Oops! Actually, it wasn't so hard to make the deletions automatically. NeXT users will still have to deal with the X11 shortcomings, at least until we write a driver for Display PostScript... From: john%3jane.UUCP@mathcs.emory.edu (John B. Lockhart) Subject: Re: Oops! To: greg@hobbes.lbl.gov (Gregory J. Ward) Date: Fri, 20 Dec 91 1:05:52 EST :Actually, it wasn't so hard to make the deletions automatically. That's good! :NeXT users :will still have to deal with the X11 shortcomings [...] Did you make the non-X11 compilation automatic, too? Isn't that just the same sorta thing as removing malloc.o compilation except for several files? :at least until we write :a driver for Display PostScript... Well, I'll get to work on that. I've scanned over (not parsed :) the code for existing drivers and am curious--do you get the entire bitmap on the screen via rectangle painting (e.g.: bunches of small rectangles)? Or did I miss something in my once-over--I only saw things like open/close/clear/paint-rect etc. 'cause if that's all there is then it should be a five-minute hack once I figure out how the hell one gets a normal bonafide window without having a normal bonafide Objective-C app running. Oh, one last question: Can you route "command-line" i/o to the terminal you started rview from and just have a floating window? (Note that this should be moot if I can figure out what I need to figure out.) -- | John B. Lockhart |......The Georgia......| ___ |.....| | Junior/EE, Georgia Tech |.....Institute of.....| | _____ |.....| | john%3jane.uucp@mathcs.emory.edu |......Technology:......| |___|| |.....| | (Above address NeXTmailable.) |.....We don't mold!....| | |.....| Date: Fri, 20 Dec 91 08:34:01 PST From: greg (Gregory J. Ward) To: john%3jane.UUCP@mathcs.emory.edu Hi John, I didn't want to take out the X11 stuff automatically for the NeXT, just in case NeXT should support X11 in the future. I should probably have a special question about X11 support, though, and make the changes based on the presence or absence of certain files. It all gets so complicated... Yes, the picture is drawn by rview soley with paintrect calls. I tried to keep the driver interface as bone-headed simple as possible. The tricky part is usually getting input while drawing. Rview has to be notified (using the inpready member) as soon as input is available or response time suffers. It should be possible to get this from the standard input, although I have never done it this way. Keep in mind that rview should run continuously until "interrupted" by user input. This mode of interaction is not supported easily by all window systems, but there is usually a way. A specific problem we have had with our NeWS driver is that the rectangles it paints don't really mesh nicely. Because PostScript uses its own device-independent coordinate system, there is some inaccuracy in exactly which pixels are drawn by a paintrect call, and the result is a lot of ugly seams everywhere. If you have this problem and find a solution, I would be most curious to hear about it. Good luck, and let me know how I can help! 
-Greg From: john%3jane.UUCP@mathcs.emory.edu (John B. Lockhart) Subject: Re: Oops! To: greg@hobbes.lbl.gov (Gregory J. Ward) Date: Fri, 20 Dec 91 13:24:27 EST Yo, Greg. :I didn't want to take out the X11 stuff automatically for the NeXT, just in :case NeXT should support X11 in the future. I should probably have a :special question about X11 support, though, and make the changes based :on the presence or absence of certain files. It all gets so complicated... That's what I wanted. Heaven forbid you make it just a NeXT switch-- I mean ask if you have X then it compiles or doesn't compile things based on that. I realize it is somewhat complicated--Make isn't quite built for all the extremes of compatibility you're pushing it to. :Yes, the picture is drawn by rview soley with paintrect calls. I tried to Makes sense. :Keep in mind that rview should :run continuously until "interrupted" by user input. This mode of interaction :is not supported easily by all window systems, but there is usually a way. I guess I'm still foggy on what you're doing because I've never seen rview run (hell of a disadvantage, writing a driver for something you have to write the driver for to make it work). I'll just study the NeWS code real well then patch a NeXT hack to get a feel for it then refine things. :A specific problem we have had with our NeWS driver is that the rectangles :it paints don't really mesh nicely. Because PostScript uses its own :device-independent coordinate system, there is some inaccuracy in exactly :which pixels are drawn by a paintrect call, and the result is a lot of :ugly seams everywhere. If you have this problem and find a solution, :I would be most curious to hear about it. I had a feeling you'd say that. I've run into this problem before with PostScript resolution nastiness while trying to make a radar screen for a game I'm working on--seems that 1/72" screen rectangles are sometimes one pixel and sometimes two pixels wide, depending on where you draw them! Can you say "averaging?" I knew you could. That looks really good in terms of printed output and smooth transitions but it's real hell for the sort of thing we're trying to do. I have an idea for a kludge: Since I imagine your rview makes the rect call only if it has to draw a "pixel," why not make the driver allocate a bitmap, attach it to a window, then set pixels in it on the fly while occasionally flushing it to the window? Though sortof aesthetically displeasing, it should work with a reasonable amount of speed and give normal picture quality, since PostScript knows about bitmaps... And as an added bonus I think that'd make it really easy to put the picture in a resizable, scrolling window (instead of just lobbing a 1024 x 768 over everything). I'll look into that. -- | John B. Lockhart |: : : : : : : : Alien III : : : : : : : : | | Junior/EE, Georgia Tech | : : : : : : : : : : : : : : : : : : : : :| | john%3jane.uucp@mathcs.emory.edu |: : : : : : The bitch is back!: : : : : : | | (Above address NeXTmailable.) | : : : : Coming Memorial Day 1992: : : : :| ~s Radiance Digest, Vol. 2, No. 10 Dear Radiance users: Here is a long overdue installment of the Radiance Digest, your peek at conversations between me and users like yourself. If you don't consider yourself a "user;" you have kicked the habit and would like to kick the mailing list as well, write to me at: radiance-request@hobbes.lbl.gov Please do not respond directly to this mailing, as we don't have a proper list server. (And please don't ask why not.) 
Here is the list of topics covered in this edition: GENSKY AND COLOR How to color the sky and ground ANIMATION Steps towards walk-through animations SETTINGS AND ACCURACY How renderer settings map to accuracy MAPPING BARK TO BRANCHES How patterns map to cylinders MUNGING PICTURE HEADERS How to change the header on a picture MODELING A LASER How to model laser light sources FALSECOLOR IN BLACK AND WHITE Getting B&W display from falsecolor COMPILE SWITCHES What are all those things in makeall? TRANS PARAMETERS Making sense of trans primitive SOLAR ECLIPSE Sampling a solar penumbra RETROREFLECTORS Modeling SAE reflectors In other news, I plan very shortly to release version 3.0 of Radiance, which includes (among other things) a new type for single-scatter modeling of participating media, and an animation control program called ranimate. I decided to call it 3.0 instead of 2.6 since I made so many major revisions and it's been nearly a year since 2.5 was released. Watch this list for an announcement. Happy rendering! -Greg ====================================================================== GENSKY AND COLOR From HXZZDUBIELJ@cluster.north-london.ac.uk Wed Oct 25 10:09:15 1995 Date: Wed, 25 Oct 95 17:06 BST From: HXZZDUBIELJ@cluster.north-london.ac.uk To: GREG Subject: Re: Floodlit Radiance pictures To: Greg Ward From: Jo Dubiel and Aris Tsangrassoulis Hi Greg, You may remember that I was going to send you some pics via xfer, and a couple of papers. I have not forgotten, just not got round to it yet. Aris is visiting us from the University of Athens. He seems to be the only person in Greece who is using RADIANCE at the moment. We are both working on a new project here, looking at daylighting under sunny skies, and we will use RADIANCE simulations. This has brought up a couple of questions: (1) What is the difference between using the '-g' option on gensky, and using a ground of the same reflectivity as follows: skyfunc glow ground_glow 0 0 4 0.2 0.2 0.2 0 (2) If using the 'glow' material type for the sky, how do we arrive at correct values for the r,g,b ? Is the magnitude of these values important? What r,g,b values would you recommend for the colours of different sky types? skyfunc glow skyglow 0 0 4 0.9 0.9 1 0 We hope to hear from you soon, Aris & Jo ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From: j.dubiel@unl.ac.uk Jo Dubiel Low Energy Architecture Research Unit University of North London Spring House 6 - 40 Holloway Road London N7 8JL Tel: 0171- 753- 7006 Fax: 0171- 753- 5780 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From greg Wed Oct 25 10:24:08 1995 Date: Wed, 25 Oct 95 10:23:33 PDT From: greg (Gregory J. Ward) To: HXZZDUBIELJ@cluster.north-london.ac.uk Subject: Re: Floodlit Radiance pictures Hi Aris and Jo, Working kind of late, aren't we? You should use the -g option of gensky rather than reducing the color param's on your glow material. If you want to add color to either your ground or your sky, make sure that the luminance is still 1 so that when you combine it with the skyfunc pattern, the total luminance will be correct. To do this, use the following formula for luminance from (r,g,b): l = .265*r + .670*g + .065*b (This is in units of .00559 cd/m^2, but never mind that.) Then, say you want your color (hue,saturation) to be the same as (r,g,b)=(.5,.1,.05) -- a nice brown. The values you would use for your glow would be these divided by the luminance as defined above, i.e. divided by (.265*.5 + .670*.1 + .065*.05), which is .203.
Thus, you would use: skyfunc glow groundglow 0 0 4 2.47 .493 .247 0 Apply the same type of calculation to the sky. I don't know the best color to use for anything, unfortunately. You'll just have to pick your own preference for that. Beware highly saturated colors, as they tend to result in odd-looking interiors with strong color casts on the floor and ceiling. Good luck! -Greg [And here's the question once more from someone else -- I can't believe I forgot that I had answered it before, but I've probably done it before and will probably do it again....] From jst_ibp@mars.IPA.FhG.de Thu May 23 00:43:59 1996 Date: Thu, 23 May 1996 09:43:50 EDT From: jst_ibp@mars To: %mx%"greg@hobbes.lbl.gov"@mars Subject: Gensky problem Dear Greg, We were asked the following questions by Dr. Hans Schmidt, Siemens, which we are not able to answer and therefore would very much appreciate your help. Probably the answers are very straightforward for you... H. Schmidt noticed that the colour used for the sky influences the luminances on the pictures and asked the following questions: 1) How do the RGB values for sky_glow and ground_glow have to be set to get the "original" CIE sky luminances and realistic values for the ground (R=G=B=1 ?) 2) How does the -g option influence the result? Are the values of skyfunc in the "lower" hemisphere weighted by using this factor? 3) R=G=B=1 results in a sky which is much too bright. Dark blue (for clear sky) and dark grey (for overcast sky) seem to be much better. What can be done to get a realistic visualization and to be correct according to CIE definition at the same time? Thanks for your help. Best regards, Juergen ********************************************************************* Juergen Stoffel Fraunhofer Institute of Building Physics, Division of Heat Technology Nobelstrasse 12, 70569 Stuttgart, Germany Tel +49 711 970 3327, Fax +49 711 970 3399, e-mail: jst@ibp.fhg.de ********************************************************************* From greg Thu May 23 10:27:36 1996 Date: Thu, 23 May 96 10:27:12 PDT From: greg (Gregory J. Ward) To: jst@ibp.fhg.de, jst_ibp@mars Subject: Re: Gensky problem Hi Juergen, > 1) How do the RGB values for sky_glow and ground_glow have to be set > to get the "original" CIE sky luminances and realistic values for the > ground (R=G=B=1 ?) For a blue sky, I recommend the following glow material: skyfunc glow sky_glow 0 0 4 0.986 0.986 1.205 0 sky_glow source sky 0 0 4 0 0 1 180 This will give a slight blue tint without affecting the luminosity. I computed this using the formula: grey = .265*R + .670*G + .065*B > 2) How does the -g option influence the result? Are the values of skyfunc > in the "lower" hemisphere weighted by using this factor? Yes, gensky's -g option does affect the luminance of the lower hemisphere. You should therefore make sure that it agrees with the grey value of the color of the ground plane, if any. If your ground plane is not grey, you should also use the above formula to compute an appropriate glow source for the ground. For example, let's say that your ground plane has the color (R=.4, G=.3, B=.1).
Then, your -g option should be set to (.265*.4 + .670*.3 + .065*.1) = .3135, and your glow material should have these values divided by the grey value, i.e., (R=.4/.3135, G=.3/.3135, B=.1/.3135), thus: !gensky {date} +s -g .3135 {other options} void plastic ground_mat 0 0 5 .4 .3 .1 0 0 ground_mat ring ground_plane 0 0 8 0 0 0 0 0 1 0 100 skyfunc glow ground_glow 0 0 4 1.28 .957 .319 0 ground_glow source ground 0 0 4 0 0 -1 180 > 3) R=G=B=1 results in a sky which is much too bright. Dark blue (for > clear sky) and dark grey (for overcast sky) seem to be much better. > What can be done to get a realistic visualization and to be correct > according to CIE definition at the same time? This is a dynamic range problem, and there is little one can do about it. The same problem occurs with an ordinary camera -- one simply cannot expose both the inside and outside simultaneously and get decent results. When a person is in the actual environment, their pupils and retina adjust as they look out the window then back at their desk. On a picture or a computer monitor, however, we cannot easily mimic this. The best you can do is to readjust the exposure in ximage using the '=' or '@' commands. Others have substituted a fake sky out the window to overcome this problem, but that seems like a lot of work and a bit of a cheat if you ask me. Hope this helps. -Greg From MFKPGMA@fs1.ar.man.ac.uk Fri Nov 24 03:07:17 1995 From: Mohd Hamdan Ahmad Organization: Planning and Architecture. To: greg@hobbes.lbl.gov Date: Fri, 24 Nov 1995 11:04:20 GMT0BST Subject: RADIANCE Dear Greg, I am using Radiance to get DF results for my scenes. I would like to ask if there is a way by which I can get rtrace to generate periodic reports (similar to "rad") while I run my simulations. The reason is the long simulation time needed to do an rtrace to get my illuminance measurements. My rtrace script is as follows: rtrace -h -i -ds 0.02 -dt 0 -dc 1 -ab 2 -aa .15 -ad 512 -ar 128 -as 256 \ *.oct < points | rcalc -e '$1=($1*0.3+$2*0.59+$3*0.11)/(PI*8.9)' \ >> df_file Another question I would like to ask refers to the gensky command. I am currently using the Standard CIE Overcast sky. If I use the following rad file: !gensky 01 01 13 -c -a 52.3 -o 0 -m 0 -B 27.93 skyfunc glow skyglow 0 0 4 1 1 1 0 skyglow source sky 0 0 4 0 0 1 180 is it safe to assume that this will generate an even sky distribution? The reason I ask is because I've tested this sky on a symmetrical model (i.e., a box) with no roof and I get DF results that are not the same on opposite ends of my model (I have used the same material reflective properties for all my surfaces). Is there any way I can improve on the results, or is this really how RADIANCE behaves? If my illuminance readings are not the same on opposite ends of my room, then should I assume that the -c (CIE overcast sky) parameter for gensky takes into account a sun source at an angle? Please enlighten me on this. Cheers, Hamdan Ahmed email: mfkpgma@orpheus.man.ac.uk From greg Mon Nov 27 11:16:35 1995 Date: Mon, 27 Nov 95 11:16:02 PST From: greg (Gregory J. Ward) To: MFKPGMA@fs1.ar.man.ac.uk Subject: Re: RADIANCE Hello Hamdan, The only way to know how far rtrace has progressed is to use the -x option to cause it to flush its output after each point. Then, use the -u option to rcalc to make it flush its buffer, also. The number of lines in the output file (which can be determined simply with the "wc" program) will tell you how many of the total points have been computed.
Your command would change to: rtrace -x 1 -h -i -ds 0.02 -dt 0 -dc 1 -ab 2 -aa .15 -ad 512 -ar 128 -as 256 \ *.oct < points | rcalc -u -e '$1=($1*0.3+$2*0.59+$3*0.11)/(PI*8.9)' \ >> df_file I hope you realize also the difference between the rtrace -i and -I options. The former, which you are using, sends rays from the point in the direction given to intersect a surface and compute the irradiance at the intersected point. If, instead, you want to compute the irradiance at a specific point, which may not lie on any surface (e.g., the workplane), then you should use the -I option instead. The gensky command you are using will compute a symmetric CIE overcast sky distribution. It is not uniform, because it changes as a function of altitude, but it is not a function of azimuthal direction, so your daylight factors on opposite sides of a symmetric room should match. That they don't is not unusual, but it may indicate that more samples (-ad parameter) are required for better accuracy. Don't expect the values to ever match exactly, however, since Monte Carlo calculations always have some random error associated with them (as opposed to a systematic error, which may match). I hope this helps. -Greg ===================================================================== ANIMATION From takehiko@MIT.EDU Tue Dec 12 06:55:01 1995 To: greg@hobbes.lbl.gov (Gregory J. Ward) Subject: Re: Question 2. Date: Tue, 12 Dec 1995 09:54:23 EST From: Takehiko Nagakura Mr. Ward; Thank you for the tips about rendering parameters. I tried boosting the -ad and -as values and am getting better image quality now (takes a moment on my workstation, though). I am starting to work on a project to visualize the interior space of famous but unbuilt architecture from the early 20th century, and I am interested in making animations. I have a little experience making CG films of architecture in Alias and Wavefront, but for this interior project, I would love to try it with Radiance. Phil Thompson showed me a few of the animated works he did with Radiance before, and I am wondering if there is any utility for doing animation that you or other people have made? I found one on the Web made by New Zealand people (http://archpropplan.auckland.ac.nz/Graphics/radiance/ra_sunrun.html), but if there are any other utilities for doing walk-through or lighting animation, etc., will you let me know where I should look? (I could not find any in your radiance manual. I am currently running radiance2.5, and maybe something is in 2.6?) I have some experience in cshell and C programming, and I would love to build myself something that I can contribute, but I would also like to know what is available now. Thank you for taking the time to read this again. Takehiko Nagakura (Assist. Prof. of Architecture, MIT) From greg Tue Dec 12 09:59:19 1995 Date: Tue, 12 Dec 95 09:59:05 PST From: greg (Gregory J. Ward) To: takehiko@MIT.EDU Subject: Re: Question 2. Hello Prof. Nagakura, Unfortunately, there are no general animation tools in Radiance. I have done lighting and walk-through animations myself, and I find it is always better to write the scripts to do them (using the C-shell or Tcl or Perl or whatever) and customize them each time. There are simply too many different optimizations and different ways of doing things to have a general utility. I have tried and failed to design one several times. You should study the utility programs cnt and rcalc, and the image interpolation program pinterp, which uses the -z option of rpict.
Other rpict options you should know are -S and -o. Check the man page for explanations. My usual method to create a walk-through animation is to select keyframe points using rview and the "view key.vf -t #seconds" command to add a new keyframe with a separation (in seconds) from the previous view. Then, I use the following script to take this file and create a .cal file appropriate for use with rcalc and the "spline.cal" file, which can be found in the ray/src/cal/cal directory. Rcalc may be used with cnt to generate a sequence of evenly-spaced views as input to rpict. These steps again are: 1. Run rview, and use the "v key.vf -t #secs" command to add views to a keyframe file. 2. Use the attached script, mkspline, to generate a .cal file for use with rcalc: % mkspline key.vf > key.cal 3. Use cnt with rcalc, key.cal and spline.cal to render a sequence of frames. You may want to do this in low resolution first to check everything: % cnt 200 | rcalc -f spline.cal -f key.cal -e '5=$1/Ttot' \ -o view.fmt | rpict [rendering options] \ -S 1 -o anim%03d.pic -z anim%03d.zbf scene.oct & (The file view.fmt contains something like this: VIEW= -vp ${s(Px)} ${s(Py)} ${s(Pz)} -vd ${s(Dx)} ${s(Dy)} ${s(Dz)} ) 4. Use the generated anim*.pic and anim*.zbf with pinterp to interpolate frames inbetween for a smoother animation. The same command issued in step 3 above may be repeated on different machines sharing the filesystem over NFS. Each process will render the next unrendered frame in the sequence. If you are performing an interreflection calculation (i.e., -ab >=1), be sure to use the -af option to rpict so that values are shared between renderings and processes. Also note that the views put out by rcalc AMMEND the initial view you may set on the rpict command line. View parameters that stay the same, such as view type, resolution and probably up vector and size, need only be given once on the command line. I hope this is enough to get you started. I'm sorry that I don't have a canned solution to provide you. Maybe someday, with help from users such as you, I can gain enough insight into this problem to devise a general approach. -Greg ----------------------------------------------- mkspline #!/bin/csh -f # # Make a .cal file for use with spline.cal from a set of keyframes # if ( $#argv != 1 ) then echo Usage: $0 viewfile exit 1 endif cat <<_EOF_ { Keyframe file created by $0 from view file "$1" `date` } _EOF_ rcalc -i 'rview -vtv -vp ${Px} ${Py} ${Pz} -vd ${Dx} ${Dy} ${Dz} -vu ${Ux} ${Uy} ${Uz} -vh ${H} -vv ${V} -vs 0 -vl 0 -vo 0 -va 0 -t ${T}'\ -o '${recno} ${Px} ${Py} ${Pz} ${Dx} ${Dy} ${Dz} ${Ux} ${Uy} ${Uz} ${H} ${V} ${T}'\ $1 | tabfunc Px Py Pz Dx Dy Dz Ux Uy Uz H V T [I have since written a new program, called ranimate, which handles many of the nasty details of running long animations. It will be included in the 2.6 release, due any day now. -G] ====================================================================== SETTINGS AND ACCURACY From karner@fcggsg02.icg.tu-graz.ac.at Mon Jan 29 13:28:25 1996 Return-Path: Date: Mon, 29 Jan 1996 22:17:43 +0100 (MET) From: Konrad Karner To: GJWard@lbl.gov Subject: Accuracy! Status: R Hi Greg, Thank you for your quick answer of my last question. I'm using Radiance (rpict and rtrace) to calculate luminance values respectively illuminance values in my test scene. I used your recommended rendering options (min, fast, accurate) to calculate the luminance values for the same point and I saw that there occur large differences. 
So I'm interested how close are the value of the accurate simulation to the maximum option. For example, I got 30cd/m2 for the min. option 38cd/m2 for the fast option and 88cd/m2 for the accurate option I had to stop the calculation with the max. option because it took to much time (I got no results after 3 days on a SGI Power Challenge). I also studied the convergence of the algorithm by calculating the luminance values in 11 equally steps between the fast and the accurate option by interpolating the parameters. The luminance values of 50% of the points in the last steps exhibits no convergent behavior. Do you have any data of the accuracy of the algorithm? Do you know how large are the errors in the min, fast, accurate and max option could be? Thanks, Konrad |************************************************************| | Konrad F. KARNER | | Institute for Computer Graphics | | Graz University of Technology | | -----------------------------------------------------------| | Muenzgrabenstrasse 11 | | 8010 Graz, Austria | | Tel.: (+316) 873-5022 Fax: (+316) 873-5050 | | E-mail: karner@icg.tu-graz.ac.at | | WWW: http://www.icg.tu-graz.ac.at/ | |____________________________________________________________| From greg Mon Jan 29 13:37:37 1996 Date: Mon, 29 Jan 96 13:37:15 PST From: greg (Gregory J. Ward) To: karner@icg.tu-graz.ac.at Subject: Re: Accuracy! Hi Konrad, The accuracy of the calculation depends a lot on your scene, so there is no direct correlation between most of the rendering options and a percentage accuracy value. This is an unfortunate but necessary characteristic of this approach to lighting calculation. The only method that can truly claim convergence under arbitrary conditions is naive Monte Carlo, which would never finish in our lifetimes for most scenes. My best advice is to employ the "rad" program to set options for you based on qualitative scene characteristics. Using "rad", you can expect about 20% pixel accuracy for "low" quality, 10% pixel accuracy for "medium" quality, and 5% pixel accuracy for "high" quality settings. There will be exceptions, of course. The kind of bad convergence you are seeing would seem to indicate that you are not choosing an appropriate value for the -av setting, which is very important as an initial guess of average scene radiance. Without it, you are relying entirely on Radiance to follow every bit of light around your scene, through all its random bounces ad infinitum. It doesn't do that so well. The main parameter that will affect convergence in simple scenes is -ab, followed by -ad. -Greg P.S. Radiance has been validated against other calculations and model measurements, and is (at least potentially) quite accurate. The problem is applying it properly, which can be difficult. =================================================================== MAPPING BARK TO BRANCHES From cs4gp6ar@maccs.dcss.mcmaster.ca Thu Feb 29 12:30:59 1996 Date: Thu, 29 Feb 1996 15:27:23 -0500 (EST) From: Janik ME To: greg@hobbes.lbl.gov Subject: radiance Hello there, We are working on a project at McMaster University with Dr.Jones to generate various trees using L-systems. We would like to map bark and leaf textures onto them, but we are having a difficult time finding examples or methods in the available literature. If you could please tell us where to look for such examples, or send us an example we would greatly appreciate it. Thanks Marta Janik From greg Thu Feb 29 12:41:01 1996 Date: Thu, 29 Feb 96 12:39:28 PST From: greg (Gregory J. 
Ward) To: cs4gp6ar@escher.dcss.McMaster.CA Subject: Re: radiance Hi Marta, I don't have a really good example handy, but for mapping onto branches and the like, I've used a cylindrical mapping that looks like this: void colorpict bark_pat 7 red green blue pinebark.pic cyl.cal cyl_match_u cyl_match_v 0 2 1.5225225 .25 The first real argument is the aspect ratio of the pattern's picture (pinebark.pic in this case), and the second real argument is the unit scale for this picture (larger values yield larger tiles). This pattern must then be transformed (along with the branch) to the appropriate position. The cylindrical mapping in cyl.cal assumes you have a cylinder with unit radius whose axis is along the Z coordinate axis. You may use a cone with a slight graduation instead of a cylinder and you won't notice much difference. I hope this helps. -Greg ==================================================================== MUNGING PICTURE HEADERS From courret@lesosun2.epfl.ch Fri Mar 1 02:32:30 1996 Return-Path: Date: Fri, 1 Mar 96 11:30:22 +0100 From: courret@lesosun2.epfl.ch (Courret) To: greg@hobbes.lbl.gov Subject: findglare Status: RO Hi greg, I would like to compute Guth's comfort index for a picture obtained by pcomb. I have seen in the man page of findglare that this command will not work on pictures processed by pcomb. Is there a way to go around this limitation? Or, would it be possible to extend the capability of findglare? Note: I have also tried to add the view information lost by pcomb using the option -vf view.vp in the command that calls findglare, but without success... Regards, -------------------------------------------------------- Mr Gilles COURRET Laboratoire d'Energie Solaire et de Physique du Bâtiment ITB/DA Ecole Polytechnique Fédérale de Lausanne 1015 Lausanne Suisse Tél: xx.21.693.55.53 Fax: xx.21.693.55.50 e-mail: courret@lesosun1.epfl.ch From greg Fri Mar 1 09:37:15 1996 Date: Fri, 1 Mar 96 09:35:40 PST From: greg (Gregory J. Ward) To: courret@lesosun2.epfl.ch Subject: Re: findglare Hi Gilles, The main problem with pcomb is not that it loses the view information (though this may be why findglare fails), but that it performs arbitrary operations on the pixels, so it is easy to mess up. You can get around the information header stuff by editing it yourself, like so: % getinfo < picture > header % vi header % getinfo - < picture >> header % mv header picture The "getinfo -" command takes a picture from standard input and strips off the header, thus the above replaces the header of "picture" with whatever you create in vi. Hope this helps. -Greg ================================================================ MODELING A LASER From jm@dmu.ac.uk Tue Mar 26 06:12:37 1996 Date: Tue, 26 Mar 1996 14:12:35 GMT From: John Mardaljevic To: greg@hobbes.lbl.gov Subject: Re: Mist extinction Two things: (1) How do I specify a non-diverging ("laser") beam, other than using a long focus spot; and (2) I thought you'd like to know, if you don't already - the Bartlett School (Univ. College London) have just advertised a "Lighting Researcher" post specifically asking for Radiance experience. It's only one year, but I guess they feel they are being left behind without some direct involvement. Be interesting to see who they get. -John From greg Tue Mar 26 11:04:14 1996 Date: Tue, 26 Mar 96 11:03:57 PST From: greg (Gregory J.
Ward) To: jm@dmu.ac.uk Subject: laser Hi John, The only way I can think to model a laser beam is to set the radiance to increase as a function of distance, but be non-zero only within a cylinder projected from the source. I.e.: # Beam function with cancellation of fall-off (A1 is beam radius) void brightfunc laser_beam 2 if(Ts*Ts*(1-Rdot*Rdot)-A1*A1,0,Ts*Ts/(PI*A1*A1)) . 0 1 .125 # Laser beam emittance (in watts/sq.meter) laser_beam spotlight laser 0 0 7 1000 0 0 1 0 1 0 # Laser source (radius should match laser_beam A1 above) laser ring laser_source 0 0 8 0 0 0 0 1 0 0 .125 The function in laser_beam does two things -- first, it checks to see whether or not a sample point is within the cylindrical beam. If it isn't, a value of zero is returned. If it is, then a value that will cancel the normal R-squared falloff is returned. This, when multiplied by an initial radiant emittance, will result in the same irradiance for any surface oriented towards the laser source at any distance. The spotlight type is only used for efficiency, so we don't have to check the source in all directions, just 1 degree around the aimed direction. That's curious about the Bartlett School position. I wish Radiance drew the sort of crowd here that it does in England! Maybe I should have you come out to California to run my publicity campaign! -Greg ================================================================ FALSECOLOR IN BLACK AND WHITE From sfa@sizemorefloyd.com Thu Apr 4 14:44:41 1996 Date: Thu, 4 Apr 1996 17:44:30 -0500 To: greg@hobbes.lbl.gov From: "Sizemore Floyd Architects, Inc." Subject: falsecolor question Hi Greg, I have a (hopefully) brief question for you: We am trying to put together a presentation for our clients, where we need to make a _greyscale_ graphical represention of the "value" of one daylighting design over a presumably less optimal one. (Since I have virtually /no time/ to do this,) my idea was to use some perspective renderings of the two designs at issue, then use 'falsecolor' with the -r/-g/-b switches so that the range of falsecolors representing the luminance gradient would go from white to black instead of red to green to blue. We could then (hopefully, with a little help from Photoshop) use these images to visually illustrate our point. (In addition to the daylight factor (etc.) numbers.) The docs say: "The remaining options, -r, -g, and -b are for changing the mapping of values to colors. These are expressions of the variable v, where v varies from 0 to 1. These options are not recommended for the casual user." Well, I'm pretty casual but I'd like to try this anyway... Can you please point me toward any information on how to go about tweaking these options? Hope the intent of what I'm thinking of is clear. Oh yeah, please email me at stuartl@netcom.com instead of the reply-to address. Thanks! -Stuart PS: I've set up a new Pentium Pro in the office with Linux expressly for the purposes of running Radiance. (It's not an Indigo, but...) It multiple-boots Windoze NT and DOS also. Stuart Lewis, Sizemore Floyd Architects, Inc. stuartl@sizemorefloyd.com From greg Thu Apr 4 15:16:52 1996 Date: Thu, 4 Apr 96 15:16:28 PST From: greg (Gregory J. Ward) To: stuartl@netcom.com Subject: falsegrey Hi Stuart, Glad to hear you're still struggling with Radiance in your new capacity.... Try the following: -r 'if(frac((x-y)*v/10)-v,1,0)' -g 'if(frac((x-y)*v/10)-v,1,0)' -b 'if(frac((x-y)*v/10)-v,1,0)' This should result in a dashed line, the length of which is controlled by the value. 
(I'm assuming you're using the -cl and -p options of falsecolor as well.) I tried it on one example, and it seems to work for B&W output. The only problem is that the dashes aren't computed according to the line orientation (which is unknown to the program), so diagonals one way look better than diagonals the other way. -Greg From stuartl@netcom.com Fri Apr 5 07:56:26 1996 Return-Path: From: stuartl@netcom.com (Stuart Lewis) Subject: grayscale revisited To: greg@hobbes.lbl.gov Date: Fri, 5 Apr 1996 07:56:05 -0800 (PST) Status: RO Hi Greg, I tried both the solutions you offered, but they didn't do /exactly/ what I had in mind -- hopefully it's simpler than that. We are not going to use contour lines at all on these images. The intent is to convey different degrees of brightness to the viewers (who wouldn't understand the numbers, anyway.) Any ideas? Your solutions /will/ be valuable in the future, since we don't do very many color submittals. Thanks again! -Stuart From greg Fri Apr 5 10:00:59 1996 Date: Fri, 5 Apr 96 10:00:34 PST From: greg (Gregory J. Ward) To: stuartl@netcom.com Subject: Re: grayscale revisited Well, all I can suggest is setting "-r v -g v -b v", thereby mapping to greyscale with a legend, and see if that's good enough. -G From jedev@MIT.EDU Tue May 7 13:10:10 1996 To: greg@hobbes.lbl.gov (Gregory J. Ward) Subject: lighting units Date: Tue, 07 May 1996 16:10:15 EDT From: John E de Valpine Greg: I apologize if this is somewhere that I should be able to discover it myself, but what are the proper units for "nits"? I assume that "nits" are the appropriate term for radiance and irradiance values -- is this correct? And while I am on the subject, when one uses the mouse and the "L" key to query a radiance image, what are the units returned -- is it lux? How can I create an image that has a series of these values displayed when the image is first opened with ximage? My knowledge of lighting is largely experiential, derived through experimentation with radiance, so these are probably fairly naive questions. Thanks. -Jack de Valpine From greg Tue May 7 13:21:18 1996 Return-Path: Date: Tue, 7 May 96 13:21:05 PDT From: greg (Gregory J. Ward) To: jedev@MIT.EDU Subject: Re: lighting units Status: R Hi Jack, "Nits" is shorthand for "candelas per square meter", which is shorthand for "lumens per steradian per square meter". This is the photometric unit reported by the "l" command of ximage when run on a standard Radiance picture. If, on the other hand, the Radiance picture was generated using the -i option, then the values reported would be in terms of lux (lumens per square meter). The units of radiance are actually watts per steradian per square meter. These get converted from watts to lumens using a factor of 179 lumens/watt and the proper proportions of red, green and blue to match the human eye's sensitivity to different wavelengths. There is no way to get ximage to come up displaying point luminances, unless you write a program or something to stick mouse events into its input queue. (I don't know enough about X11 to even be sure this is possible.) A better approach is to use the "falsecolor" program to convert your image to one that displays luminance or whatever with a corresponding legend. Check out the manual page for examples. -Greg P.S. Your questions are not too naive. A lot of users don't know this stuff, and it's rather difficult to explain/understand.
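[For reference, the conversion Greg describes can be scripted with the same rtrace/rcalc pattern used in the daylight factor example earlier in this digest; a minimal sketch only, with the octree and "points" file standing in as placeholders:

% rtrace -h -I scene.oct < points | rcalc -e '$1=179*(.265*$1+.670*$2+.065*$3)'

Applied to the irradiance values from an rtrace -I run, this prints illuminance in lux; the same expression applied to radiance values (a run without -i or -I) gives luminance in candelas per square meter, i.e. nits. The 179 lumens/watt factor and the .265/.670/.065 weightings are the ones quoted above.]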
==================================================================== COMPILE SWITCHES From audile@onramp.net Fri May 24 10:43:14 1996 Subject: RE: optimized rpict Date: Fri, 24 May 96 12:42:53 -0500 From: Robert Kay To: "Kevin Matthews" , "Greg Ward" Hi Greg and Kevin: I thought I'd let you know of my progress in optimizing the rpict code. Although I'm in almost daily contact with Motorola with bug reports on their compiler, I have made some good progress with the project. The large image in the texture folder which originally took around 30 hours now completes in around 10. The compiler is causing some of the classic size/speed issues; not in the produced code (it is actually smaller) but in the memory usage of the program when running. I have gotten stack overflows several times. As soon as I figure out how to adjust stack size, I will. There are still several outstanding issues which I do not understand. The new rpict cannot properly execute commands (!'s) within an object. It is as if the pipe which sends the data back to rpict always comes up empty. I can't figure out why. Other problems include the inability to fully optimize the files calfunc.c and o_instance.c and particularly header.c, which causes the compiler to crash the machine when it is optimized !!! I have also not been able to get the compiler to properly do interprocedural analysis across different source files, which I'm sure will help quite a bit. I have a plan to make that happen, but first I have to figure out what makes the optimizers freak out with the above three files. Greg -- is there any place I can find out just what all of the Radiance compile switches really are for? Most are obvious, but other's are a little harder to figure out. I have tried to go through the makeall script to figure out when they get set, but it's not quite enough info. So that's where I am. I look forward to making this rpict binary available to everyone when I get it fixed. I really think it makes the Mac a more than just viable rendering platform. Next we'll try to get it out of the UNIX operating system, which slows it down. Seems like it should be possible (with frozen octrees at least). Maybe after that we'll figure out Mac distributed rendering. take care, robert From greg Tue May 28 11:22:32 1996 Date: Tue, 28 May 96 11:22:06 PDT From: greg (Gregory J. Ward) To: robert@audile.com Subject: RE: optimized rpict Cc: matthews@aaa.uoregon.edu Hi Robert, Your work sounds quite ambitious to me! I'm glad that you're making such progress. In answer to your query regarding compile switches, here's a quick list: -DALIGN=(type) Alignment type, machine-dependent. Most RISC architectures align on 8-word boundaries (double). The default alignment type is int. -DSPEED=(MIPS) Millions of instructions per second for this processor (approximate). This is used to decide certain unimportant timing issues such as how many rays to trace before checking input in rview and whether or not to optimize the color table in ximage on 8-bit displays. -DWFLUSH=(rays) Override for number of rays before flush in rview. -DBSD Operating system has a strong Berkeley flavor, meaning that bcopy() and bzero() are present but maybe memcpy() and memset() are not. (See common/standard.h for other things this flag affects.) Also affects certain system calls, such as signal handling and resource tracking. -DDCL_ATOF The function atof() (ASCII to float) is not defined in either stdio.h or math.h, so we need to declare it. 
In better days, a duplicate declaration of such a function would never have been a problem, but with the advent of prototypes and idiot systems that define these things as macros, portability is Hell. -DBIGMEM The system has lots of RAM available, so size hash tables and the like accordingly. Also provides for larger overall scene descriptions (262,080 primitives rather than 32,704). Even larger scenes can be accommodated by defining MAXOBJBLK > 4096. -DSMLFLT This setting tells Radiance to use short floats (4-bytes) throughout, which saves lots of memory but can cause calculation inaccuracies in many cases. Its use has been discontinued for this reason. Other defines may be used to overcome portability problems on various operating systems. A common trick, for example, is -Dvoid=char to overcome systems that define malloc() as returning pointer to void, which is prohibited on other systems and disagrees with the way I've defined things in Radiance. (Again, this wouldn't be such a problem if there was consistency as to when and where malloc() and other such declarations show up.) I hope this helps. -Greg ========================================================================== TRANS PARAMETERS From jedev@MIT.EDU Fri May 31 11:40:07 1996 From: jedev@MIT.EDU To: greg@hobbes.lbl.gov (Gregory J. Ward) Subject: More question on 'trans' Date: Fri, 31 May 1996 14:40:45 EDT Greg: Based on our previous discussions I modeled a cloth membrane in the following manner: void trans trans_cloth 0 0 7 0.35 0.35 0.35 0.1 0 1 0.1 which assuming I have worked this out properly should yield the following behavior: void trans id 0 0 7 r g b spec rough trans tspec d_ref = ((1 - spec) * (1 - trans) * rgb_avg) ((1 - 0.1) * (1 - 1) * 0.35) = 0 I'm not sure about this one, in our last correspondence you said that I needed to add in a (1 - specular_reflection) d_trans = (rgb_avg * trans * (1 - tspec)) (0.35 * 1 * (1 - 0.1)) = 31.5% s_trans = (rgb_avg * trans * tspec) (0.35 * 1 * 0.1) = 3.5% This yields a material that from the interior appears as a light grey and through which the shadows of the external photovoltaics can be seen. But from the exterior this material appears as a fairly dark grey. Now on the one hand this may make sense, because of the darker interior and the high diffuse transmitted component we are seeing a farily dark surface. But the effect that I am now looking for which seems to make intuitive sense to me, but may not make physical sense, is a material that is 'white,' from the outside, 'light grey' inside ,and diffusely transmits approximately 35% of incoming light while reflecting approximately 65%. In the initial case did I model a 'grey' trans material? Would 'white' be something like the following: void trans trans_cloth 0 0 7 1 1 1 0.07 0.2 0.35 0.1 d_ref = ((1 - spec) * (1 - trans) * rgb_avg) ((1 - 0.07) * (1 - 0.35) * 1) = 60.45% d_trans = (rgb_avg * trans * (1 - tspec)) (1 * 0.35 * (1 - 0.1)) = 31.5% s_trans = (rgb_avg * trans * tspec) (1 * 0.35 * 0.1) = 3.5% How does roughness get factored in? What am I missing? Is there somewhere that I should look to find these various formula/relationships? As always thank you very much for your time. -Jack de Valpine From greg Fri May 31 12:04:23 1996 Date: Fri, 31 May 96 12:03:57 PDT From: greg (Gregory J. Ward) To: jedev@MIT.EDU Subject: Re: More question on 'trans' Hi Jack, The formulas for various materials may be found in the file ray/doc/material.1, or in PostScript form in ray/doc/ps/material.1. 
What you have written here is mostly correct, except that the diffuse transmittance needs another factor of (1 - spec) in there. The problem is as you have stated it; i.e., you want more diffuse reflection than your original material has. It would be highly abnormal for a material to diffusely transmit light without also diffusely reflecting it, which is why the results disagree with your intuition (which I would call 'experience'). Your revised values I think should give you a much more satisfactory, and realistic, result. The roughness determines how much specular light is scattered, which affects how clear reflected and transmitted images appear. I am not sure in this case if you really want any specular reflection -- I would be tempted to set the fourth argument to zero. If you set the roughness also to zero, then you will be able to see objects clearly through the material. I don't know if this is what you want, though. -Greg ==================================================================== SOLAR ECLIPSE From jromera@dolmen.tid.es Thu Oct 19 10:24:32 1995 Date: Thu, 19 Oct 1995 18:25:17 GMT From: jromera@dolmen.tid.es (Juan Romera Arroyo) To: greg@hobbes.lbl.gov Subject: ECLIPSES Hi Greg, long time ago I sent you a mail about simulating an eclipse. The problem was that the penumbra was not accurate. I'm trying to reproduce the situation of the next total eclipse (24th Oct 1995) The radiance file I'm working on is something like this: void plastic blue 0 0 5 0.2 0.1 0.8 0.00 0.00 void plastic gray 0 0 5 0.8 0.8 0.8 0.00 0.00 void illum sunlight 1 bright 0 3 90000 90000 90000 void light bright 0 0 3 5 0.2 0 sunlight sphere sun 0 0 4 -70.837471 162.4618481 71.9712839 109.245 !xform -s 1.0 -t 20282.884754 10793.67311 4681.0978365363 earth.norm !xform -s 0.273 -t 20232.3077 10767.19458 4669.9966 moon.norm earth.norm is: blue sphere ball 0 0 4 0 0 0 1.0 moon.norm is: gray sphere ball2 0 0 4 0 0 0 1.0 The coordinates of the moon, earth and sun are exactly the same ones as on 24th Oct. 1995 at 4:30 PM GMT. All of them are scaled to the radius of the earth. (Radius earth=1.0) When I render this file using rpict with the following parameters: rpict -ps 1 -dj 0.50 -pj .9 -ds 0.1 -vv 2 -vh 2 -x 480 -y 480 -t 5 -vp 20200 10740 4650 -vd 82 53 31 -av .00 .00 .00 I get the moon shadow on the earth. However despite the "artifacts" I get in the border of the shadow, it's also too big and too much sharp. The area of total obscurity should be much smaller than the one I get. Am I doing something wrong ? maybe changing the rpict parameters ? ... Hope you can help me on this. Thank you Juan Romera From greg Thu Oct 19 11:14:39 1995 Date: Thu, 19 Oct 95 11:14:22 PDT From: greg (Gregory J. Ward) To: jromera@dolmen.tid.es Subject: Re: ECLIPSES Hello Juan, I do not understand why you used an illum for your sun source. I would have just used the light specification with the illum arguments, i.e.: void light sunlight 0 0 3 90000 90000 90000 Anyway, this doesn't affect penumbra so it doesn't really address your question. The basic problem you are facing is the fact that Radiance does not do high-accuracy penumbra for spherical sources. I would recommend that you replace your sun with a tessellated sphere. I took the following right from the gensurf man page: !gensurf sunlight sun 'X+R*sin(PI*s)*cos(2*PI*t)' 'Y+R*cos(PI*s)' \ 'Z+R*sin(PI*s)*sin(2*PI*t)' 7 10 -e 'R:109.245' \ -e 'X:-70.837471;Y:162.4618481;Z:71.9712839' Use it in place of your solar sphere, and I think you'll start to see a better penumbra. 
-Greg ================================================================== RETROREFLECTORS From esp@sirius.com Mon Oct 9 16:48:41 1995 Date: Mon, 9 Oct 1995 16:50:58 -0700 To: gjward@lbl.gov From: esp@sirius.com (Erich Phillips) Subject: SAE Retroreflectors (and your memory) Hi Greg- You may remember that sometime ago you helped me figure out how to model retroreflectors on cars as specified by the SAE. I am trying to resurrect this, and I am confused on some points. The SAE specs give values in millicandelas per incident lux. This strange unit is provided for 3 entrance angles at each of 2 observation angles. The table for a red retroreflector is as follows:

                 0 degrees   10 degrees   20 degrees
  Observation    Entrance    Entrance     Entrance
  Angle (deg)    Angle       Angle        Angle
      0.2          420         280          140
      1.5            6           5            3

The SAE spec also states that "...reflectors may have any linear or area dimension; but, for the photometric test, a maximum projected area of 7740 mm2 contained within a 254mm diameter circle shall be exposed". As I recall, your calculations took this area restriction into account somehow. If you wouldn't mind, could you help me resolve this puzzle? I have attached the following three files that you created before: reflector.cal - the function file sae_refl.dat - the data file for red reflectors sae_retro.rad - the radiance description file for a 2" x 2" square reflector Lastly, my old mail server appears to be down. I can also be reached at esp@sirius.com, the server from which this message originates. Thanks, {reflector.cal Retroreflector BRDF's from Greg Ward A5 is the surface area of the reflector in square meters } {divide value by projected area} sae_refl(v) = v / Rdot / A5; {angle of source to normal} sae_theta (x,y,z) = Acos(x*Nx+y*Ny+z*Nz)*180/PI; {angle of view to source} sae_phi(x,y,z) = Acos(-(x*Dx+y*Dy+z*Dz))*180/PI; # sae_refl.dat 2 0 20 3 .2 1.5 2 1.67 .026 1.12 .019 .558 .011 # sae_retro.rad # # SAE Red Road reflectors # # Units originally in inches # void metdata sae_red 5 sae_refl sae_refl.dat reflector.cal sae_theta sae_phi 0 5 1 .01 .01 .9 .00258 sae_red polygon reflector 0 0 12 -1 -1 -240 -1 1 -240 1 1 -240 1 -1 -240 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< Erich S. Phillips, Ph.D. email: esp@sirius.com Biophysics work: (415) 597-4300 FTI Corporation FAX: (415) 597-4344 55 Hawthorne Street, 10th Floor San Francisco, CA 94105 U.S.A. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< From greg Mon Oct 9 17:55:47 1995 Date: Mon, 9 Oct 95 17:55:30 PDT From: greg (Gregory J. Ward) To: esp@sirius.com Subject: Re: SAE Retroreflectors (and your memory) Hi Erich, I do remember this, though not terribly well. Are the files I did for the reflector data you included in your message? If so, I have not succeeded in reproducing the values in the sae.dat file, though the calculation seems to be generally correct, at least if the millicandelas/lux are relative to lux measured perpendicular to the reflector plane. If the lux values in the SAE spec. are measured relative to the incident beam rather than the reflector, then an additional correction factor may be needed. As for the limitation on measurement conditions, there is no correction I can see for the fact that the reflector may exceed the measurement conditions. As long as your reflector is less than 12 square inches and fits within a 10" diameter circle (converting to units my brain is familiar with), there is no need for concern. Is this not the case?
-Greg From esp@sirius.com Tue Oct 10 17:37:17 1995 Date: Tue, 10 Oct 1995 17:39:49 -0700 To: gjward@lbl.gov From: esp@sirius.com (Erich Phillips) Subject: metdata and my ignorance (clue: GDG) Hello again, do you like my hat? No, I do not like that hat. Goodbye, again. Goodbye! Do you know the source? You get a wonderful prize for the correct answer. This is not really why I rang. I admit that I am still confused by the metdata material type. I do not understand exactly what it is I am computing for the data file. In the example I sent you, how would one compute the values that go into the sae_refl.dat file? For example, at 0 degrees entrance angle and 0.2 degrees observation angle, the sae spec is 420 millicandela per incident lux. Do I want to compute the effective reflectance? In my understanding, this would be Reflectance = Pi*Luminance/Illuminance. But I do not know the luminance, I know that I have a reflected luminous intensity of 0.42 candela for each incident lux. Furthermore, why in the reflector.cal function file am I dividing by the projected source area (I assume Rdot*A5 is the projected source area). I guess what I am asking is how would one structure the files to create a retroreflector meeting the SAE specs. And one more related question, if I am permitted. I also have specs for some retroreflective sheeting where the reflectance is given in units of specific intensity per unit area (such as candela/lux/square meter). How would one perform a similar computation with such data. I thank you for your prompt attention, as usual, to my questions. If my ignorance should ever become too intolerable, please feel free to question my heritage. Thanks, >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< Erich S. Phillips, Ph.D. email: esp@sirius.com Biophysics work: (415) 597-4300 FTI Corporation FAX: (415) 597-4344 55 Hawthorne Street, 10th Floor San Francisco, CA 94105 U.S.A. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< From greg Wed Oct 11 10:09:46 1995 Date: Wed, 11 Oct 95 10:09:29 PDT From: greg (Gregory J. Ward) To: esp@sirius.com Subject: Re: metdata and my ignorance (clue: GDG) Hi Erich, I don't blame you for your confusion in the least. I'll have to say that those are about the most confusing units I've ever run across, even in lighting! Here's my logic for the conversion of 420 millicandelas/lux to the required units of 1/steradian for the BRDF of metdata: 420 millicandelas/lux -> 420/1000 candelas/lux -> .42 lumens/sr/(lumens/m^2) -> .42 m^2/sr Note that the photometric quantities cancelled, so 420 millicandelas/lux might as well be .42 (watts/sr)/(watts/m^2) (as long as we don't take the spectrum into account). Now, the metdata calculation in Radiance looks like the following: radiance (watts/sr/m^2) Lo = spec * cos(theta_i) * omega_i * Li * f where: f = computed BRDF in 1/sr omega_i = solid angle of source in sr Li = avg. 
radiance of source in watts/sr/m^2 theta_i = angle between surface normal and source direction spec = multiplier for specular component from material arguments From the above formula, the incident beam irradiance is simply: beam irradiance (watts/m^2) Ei = omega_i * Li This is the quantity our SAE spec is in terms of, so our formula should resemble the following: radiant intensity (watts/sr) = SAE_value * Ei However, since Radiance never deals in units of radiant intensity, only in units of radiance, we need to divide the radiant intensity by the projected area in the viewing direction to get the radiance, i.e.: Lo = SAE_value * Ei / A_proj where: A_proj = A * Rdot From this, we can see that our BRDF is actually: f = SAE_value / (A * Rdot) / cos(theta_i) It was this final cos(theta_i) that I didn't have in my original calculation, I guess because I thought the lux value in the SAE spec was relative to the plane of the reflector. Thinking more on it (and you should check this), it makes more sense that the lux is relative to the incident beam, which may be at an angle to the reflector. So, getting back to your original question, you can either plug the SAE values into the data file directly (dividing each by 1000), and modify the .cal file so that: sae_refl(v,x,y,z) = v / Rdot / A5 / (x*Nx+y*Ny+z*Nz) ; Or, probably better, divide each value by the cosine of the incident angle. Taking the table you gave me:

                 0 degrees   10 degrees   20 degrees
  Observation    Entrance    Entrance     Entrance
  Angle (deg)    Angle       Angle        Angle
      0.2          420         280          140
      1.5            6           5            3

This yields a data file of: 2 0 20 3 .2 1.5 2 .42 .006 .276 .0049 .132 .0028 Just one final warning. You should be very careful about how you analyze the data from your calculations involving reflectors. A simple Radiance picture may give you correct pixel values, but these super-bright pixels will of course flare out when they get to your eye and retina under scotopic adaptation. I think that's the wisdom behind using luminous intensity rather than luminance. As for the sheeting, you can simply remove the area condition from the sae_refl formula in the .cal file -- i.e. take out A5. This should do it. Now, "Go, Dog, Go!" -G From greg Wed Oct 11 10:13:18 1995 Date: Wed, 11 Oct 95 10:13:03 PDT From: greg (Gregory J. Ward) To: esp@sirius.com Subject: P.S. I forgot to mention that my original calculation could actually have been right, but I don't have the data so I can't check whether or not I divided by the additional cosine factor in computing those values. Also, you really need to find out for sure what plane the SAE lux value is measured in. From esp@sirius.com Wed Oct 11 12:11:33 1995 Date: Wed, 11 Oct 1995 12:14:04 -0700 To: gjward@lbl.gov From: esp@sirius.com (Erich Phillips) Subject: Thanks and YOU WIN!! Greg- Thank you for your e-mail. What you are saying, believe it or not, makes things much clearer. I checked, and in fact the illuminance in the SAE spec is relative to the incident beam, not the surface of the reflector. This actually comes from the definitions of Specific Intensity (SI) and Specific Intensity per Unit Area (SIA), as defined in Federal Test Method Standard 370, "Instrumental Photometric Measurements of Retroreflective Materials and Retroreflective Devices," March 1, 1977. Now that's a mouthful. I found reference to this standard in an NHTSA-sponsored study on Truck Conspicuity (one of my favorite words). If you have any interest in the section dealing with photometry of retroreflectors, please let me know and I would be happy to send it.
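[The "divide each value by 1000 and by the cosine of the incident angle" step Greg describes above can also be done with rcalc; a minimal sketch, assuming a hypothetical file sae.txt whose lines each hold an entrance angle in degrees followed by the SAE value in millicandelas per lux:

% rcalc -e '$1=$2/1000/cos(PI/180*$1)' sae.txt

The file name and column layout are illustrative only; the output values are the ones to place in the data file.]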
Now on to more important matters. I don't know about you, but I really enjoy rediscovering things from my childhood through the eyes of my kids. "GO DOG GO" is one good example. Now, I must come up with a suitable prize for your correct answer. Let me think on it, and I will come up with something guarenteed not to disappoint. Perhaps a party hat... Thanks again, >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< Erich S. Phillips, Ph.D. email: esp@sirius.com Biophysics work: (415) 597-4300 FTI Corporation FAX: (415) 597-4344 55 Hawthorne Street, 10th Floor San Francisco, CA 94105 U.S.A. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< +-------------------------------------------------------------------------+ | Radiance Digest mailing list (mod.): greg@hobbes.lbl.gov | | Mail requests to subscribe/cancel to: radiance-request@hobbes.lbl.gov | | Archives available from: ftp://hobbes.lbl.gov/pub/digest | | Radiance world-wide web site: http://radsite.lbl.gov/radiance/ | +-------------------------------------------------------------------------+ ~sRadiance Digest, v2n2 Dear Radiance Users, Here is the latest backlogged collection of electronic mail exchanges on various topics of hopeful interest. As always, you can search for the subjects you want. STRANGE_VIEWS Methods for generating odd images SMLFLT_OPTION Problems with SMLFLT compile switch in 2.0 ANTIMATTER How to use antimatter type DAYLIGHT_SIMULATION Understanding daylight simulation options LUMINOUS_EFFICACY Change in luminous efficacy between 1.4 and 2.0 RPICT_PARAMETERS What are the useful ranges of rpict parameters? GENSURF New gensurf capabilities and making teapots ALIASING Aliasing and image representation SHARED_PICTURES Sharing picture files AMIGA_PORT New port available for Amiga DECSTATION Problems running on DECstations INFRARED Using Radiance in infrared spectrum SPECULARITY_BUG Bug in specular highlights of 2.0 VIEW_INFO Getting view information from files BACKGROUND_COLOR Changing the background color UPFRONT_TRANSLATOR Translator now available for Alias UpFront! SCENE_FLATTENING Flatting Radiance scene descriptions I intend to release version 2.1 shortly, and will make an announcement at the appropriate time. -Greg ========================================================================== STRANGE_VIEWS Date: Wed, 15 Jan 92 16:25:30 -0500 From: David Jones To: Greg Ward Subject: Radiance question Hi Greg, I would like to generate an image which, instead of pixels indexed by X and Y on an image PLANE, would have pixels indexed by angles of azimuth and elevation. I know I could work out some transformation to take a rendered image and "warp" it, but it would be much more efficient to do it directly. Any suggestions? thanks, Dave Date: Wed, 15 Jan 92 16:06:30 PST From: greg (Gregory J. Ward) To: djones@lightning.McRCIM.McGill.EDU Subject: Re: Radiance question Hi Dave, Nice to hear from you again. How have you been getting on in your new position? I assume by your question that the -vta fisheye view type is not satisfactory for your purposes. This option does produce an image with similar properties to those you are asking for, but the center of the image corresponds to a polar angle of 0 and surrounding pixels correspond to a polar angle that is proportional to the distance from the image center. Azimuth angle can be measured as distance (in radians) about a circle whose center is the center of the image. 
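For concreteness, here is a C sketch of the -vta pixel-to-direction mapping just described; it assumes an orthonormal view basis (vdir, vright, vup) and that the image edge corresponds to some maximum polar angle, which is an assumed parameter here:

:::::::::: fisheye.c ::::::::::
/* Sketch of the -vta mapping described above: the polar angle grows in
   proportion to the distance from the image center, and the azimuth is
   the angle about that center.  vdir/vright/vup are assumed to form an
   orthonormal view basis; maxang (the polar angle at the image edge) is
   an assumed parameter. */
#include <stdio.h>
#include <math.h>
#define PI 3.14159265358979

typedef struct { double x, y, z; } VEC;

VEC fisheye_dir(double px, double py,	/* in [-1,1], (0,0) at center */
		VEC vdir, VEC vright, VEC vup, double maxang)
{
	double r = sqrt(px*px + py*py);
	double theta = r*maxang;	/* polar angle */
	double phi = atan2(py, px);	/* azimuth about the center */
	double st = sin(theta), ct = cos(theta);
	VEC d;
	d.x = ct*vdir.x + st*(cos(phi)*vright.x + sin(phi)*vup.x);
	d.y = ct*vdir.y + st*(cos(phi)*vright.y + sin(phi)*vup.y);
	d.z = ct*vdir.z + st*(cos(phi)*vright.z + sin(phi)*vup.z);
	return d;
}

int main(void)
{
	VEC vdir = {1,0,0}, vright = {0,-1,0}, vup = {0,0,1};
	VEC d = fisheye_dir(.5, .5, vdir, vright, vup, PI/2.);
	printf("%f %f %f\n", d.x, d.y, d.z);
	return 0;
}
----------------------------------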
I can give you the equations for theta and phi as a function of pixel location for this type of image if you like. To produce an image whose upper left corner is (theta,phi)=(0,0) (theta is the polar angle and phi is the azimuth), and whose upper right corner is (theta,phi) = (0,2pi), and whose lower left corner is (pi/2,0), use the following command (setting the -e values to suit): % cnt 256 512 | rcalc -e 'xres:512;yres:256' \ -e 'vpx:0;vpy:0;vpz:0' \ -e 'vdx:1;vdy:0;vdz:0' \ -e 'vux:0;vuy:0;vuz:1' -f polar.cal \ | rtrace [options] -faf octree \ | pvalue -r -df -y 256 +x 512 > polar.pic Here is the file polar.cal: { Compute polar directions for a given view point, direction and up vector } { compute right and top vectors } rux : vdy*vuz - vdz*vuy; ruy : vdz*vux - vdx*vuz; ruz : vdx*vuy - vdy*vux; lru : sqrt(rux*rux+ruy*ruy+ruz*ruz); rx : rux/lru; ry : ruy/lru; rz : ruz/lru; lvd : sqrt(vdx*vdx+vdy*vdy+vdz*vdz); dx : vdx/lvd; dy : vdy/lvd; dz : vdz/lvd; tx : ry*dz - rz*dy; ty : rz*dx - rx*dz; tz : rx*dy - ry*dx; { get input pixel values } x = $2; y = $1; { comute theta, phi directions } theta = PI/2 * (y+.5)/yres; phi = 2*PI * (x+.5)/xres; ct = cos(theta); st = sin(theta); cp = cos(phi); sp = sin(phi); { output view point } $1 = vpx; $2 = vpy; $3 = vpz; { output view direction } $4 = dx*ct + rx*cp*st + tx*sp*st; $5 = dy*ct + ry*cp*st + ty*sp*st; $6 = dz*ct + rz*cp*st + tz*sp*st; ---------------------------------- I gave the above a try and it seems to work for my simple example. I hope that changes of view and output resolution are clear, and I think you can figure out how to modify it for different ranges of theta and phi. If you don't have version 2.0 of Radiance, this is not going to work without some modifications. Let me know how it turns out! -Greg =================================================================== SMLFLT_OPTION Date: Thu, 16 Jan 92 18:22:19 -0500 From: David Jones To: Greg Ward Subject: trouble with lighting I know I've enountered this problem before, but I forget the solution. Some of my surfaces look speckled. I have deposited a file in hobbes:/pub/xfer/speckle.pic.Z. I dumped this out of rview. Do you have any sugestions? dj Date: Thu, 16 Jan 92 15:47:17 PST From: greg (Gregory J. Ward) To: djones@lightning.McRCIM.McGill.EDU Subject: Re: trouble with lighting Hi David, Actually, I don't think you've run into this exact problem before. There are other instances where using the -dj option can cause black spots, but I don't think that's the case here. The problem (my guess) is that you said you wanted to use huge models during the Q&A session before building the 2.0 distribution. Answering yes to this particular question can (as you were warned) result in some artifacts. This is the price of using single rather than double precision numbers for the geometry calculations. I don't recommend it unless memory is really at a premium, for obvious reasons. To rebuild the distribution using double precision arithmatic throughout, do a makeall clean followed by a makeall install. Say "yes" to the question about modifying the rmake command, and remove the -DSMLFLT option therein. If -DSMLFLT is not in your rmake file, then I'm wrong and I'll have to think a little harder to figure out what's going on. -Greg Date: Thu, 16 Jan 92 19:43:56 -0500 From: David Jones To: greg@hobbes.lbl.gov Subject: Re: trouble with lighting and have you done away with "x11dev" ? 
I can't tell whether I botched the install or whether an script I use explicitly refers to x11dev when it is no longer needed. I the old x11dev from before and it seems to work. I seem to have trouble changing the viewpoint. For example, I would change it to 100 100 50 and then it would be 3.399, 3.399, 1.777 (or something else way off). If this persists after the SMLFLT change, I'll let you know. Otherwise don't worry about it unless it sounds familiar. dj Date: Fri, 17 Jan 92 09:33:21 PST From: greg (Gregory J. Ward) To: djones@lightning.McRCIM.McGill.EDU Subject: Re: trouble with lighting Hi Dave, Thanks for pointing out the problem with viewpoint changes. It's a bug associated with the SMLFLT option that I hadn't caught. It will be fixed in the next distribution. I don't think enough people are using this compile switch to make it worth sending out a patch. The X11 driver is now built in as the default in rview, so you can either specify no -o option or use -o x11 if you want to be explicit. The old x11dev separate program driver will still work, but it's less efficient. -Greg ============================================================ ANTIMATTER Date: Fri, 17 Jan 92 13:34:09 -0500 From: David Jones To: greg@hobbes.lbl.gov Subject: Re: I may be right ... Sorry for all the questions today, but ... I want to construct a shape and I think antimatter is appropriate, though I have never been able to get antimatter to work. I don't think I understand the instructions in ray.1 I want to start with a cylinder CYL1 made of material MAT1 and then take another cylinder CYL2 which intersects CYL1 and cut out the intersection. I don't want CYL2 visible, but when a ray passes from the invisible CYL2 into the volume of CYL1 I want a surface to be visible and made of MAT2. Can I do this with antimatter? dj Date: Fri, 17 Jan 92 13:54:48 -0500 From: David Jones To: Greg Ward Subject: last one for today I hope Greg, So now I have a complicated scene with lots of Radiance description files. I get the following error message: oconv room2.r > room2.oct xform: (standard input): unknown object type "xform" xform: (standard input): unknown object type "xform" I know this will be easy to fix but where the heck in all the files oconv has looked does this error occur? Would it be easy to add a "-v" option to "oconv" that printed all the file names as things were opened and closed? This will pinpoint where this xform error message is coming from. dj Date: Fri, 17 Jan 92 11:18:32 PST From: greg (Gregory J. Ward) To: djones@lightning.McRCIM.McGill.EDU Subject: Re: I may be right ... MAT1 cylinder CYL1 0 0 7 ... void antimatter AMAT2 1 MAT2 0 0 AMAT2 cylinder CYL2 0 0 7 ... AMAT2 ring CYL2.cap1 0 0 8 ... AMAT2 ring CYL2.cap2 0 0 8 ... The above should work as you describe. The rings at each end are necessary to make CYL2 an enclosed solid. Remember that the materials MAT1 and MAT2 cannot be of type trans or glass, and that the viewpoint must be outside of all volumes involved. The second problem is more difficult. I suppose a verbose option could be added that would mention every opening of a file or starting of a command in oconv and xform, but the output would be voluminous to say the least. I recommend instead that you use the following command to try to localize the error: % xform -e room2.r |& more The error message will appear shortly before the correspoding point in the expanded file. 
-Greg Date: Sun, 19 Jan 92 16:26:00 -0500 From: David Jones To: Greg Ward Subject: "popen" woes So I can't tell whether this is the fault of my radiance description files (though I really doubt it) or a bug in Radiance or some problem with our SPARCs. I am still getting difficulties with "oconv file.r" when "file.r" contains a lot of recursive "! cat otherfile.r | xform ...". The new twist is that I am printing out debugging messages to trace your popen() and pclose() calls. The symptom is that "oconv" just hangs midway through its job, but only sometimes. If I kill the process, delete the ".oct" file and restart it, then it might work the next time. How would oconv react if the system ran out of process-table entries or max # of open files, or something like that. Would it silently hang? dj Date: Sun, 19 Jan 92 15:33:35 PST From: greg (Gregory J. Ward) To: djones@Olympus.McRCIM.McGill.EDU Subject: Re: "popen" woes Hi Dave, You shouldn't even be using "cat file | xform [args]" -- "xform [args] file" is much more efficient and involves fewer processes. You should also include "-e" as the first optiont to xform to reduce the number of open processes. I don't know what happens when the system runs out of process table entries under SunOS. If you run out of open file entries (although a depth up to 32 is safe on most machines), then oconv will report an error message. The most common reason for oconv to hang is if you forgot to give xform or some other command an input file and it is trying therefore to read from the standard input. This has unpredictable results, which is consistent with the behavior you are reporting. Look out for commands such as "cat | xform .." or "xform [args]" without a file name. I hope this helps. -Greg ================================================================= DAYLIGHT_SIMULATION Date: Fri, 17 Jan 92 09:42:41 PST From: kovach@ise.fhg.de Subject: Radiance To: GJward@lbl.gov Hi Greg, I just left LBL and already I have more questions. I just talked to Peter Ap. about radiance and I have a few questions about it. I think it would be a great tool to use to get an idea on the irradiation distribution on a outer building surface and where the best place would be for PV modules. I have two questions to that point: 1. I need the values of incident irradiation and are these available as an output of the program? (Or could an extension to the program be written to obtain a output data file (for example) with the incident irradiations in it??) How much work would it entail? Would the person have to be very familiar with the code of the entire program or just a part of it? 2. How long would it take to simulate the exterior of a building with a few overhangs, neighboring buildings , ambient conditions (eg. reflection from white surfaces) and with a resolution of 6 " in real space?? How about the same conditions with a resolution of 12 " in real space? A rough estimate to these questions would be helpful! (P.S. Peter Jaegle and I tried the 'talk' command but were unable to make a connection!) Thanks , Anne Kovach From: greg (Gregory J. Ward) To: kovach@ise.fhg.de Subject: Re: Radiance Hi Anne, I think that it wouldn't be too difficult to use Radiance for the purpose you describe. With version 2.0, it is possible to get either individual irradiance values or irradiance pictures using rtrace and rpict, respectively. It is not necessary to do any programming. I will be happy to help you with the right commands at the appropriate time. 
In reference to your second question, the time required for geometric modeling depends on whether or not you use a CAD program to do it and how familiar you are with this type of work. Assuming that you are just a beginner and have no CAD program or modeling experience (but keeping in mind that you are a bright woman), I would guess that it will take you between a few days and a week to get the exterior model shaped up using a text editor and working directly with Radiance. I don't think the resolution will make that much difference in the modeling time. -Greg From: sick@ise.fhg.de Subject: falsecolor units To: greg@hobbes.lbl.gov (gregory ward) Date: Tue, 21 Jan 92 11:03:43 MEZ Hello, I started working with Radiance a few weeks ago. Peter Apian-Bennewitz was a great help in getting started. I used the new falsecolor program which might be very helpful for me since I often need quantities (numbers) rather than qualities (nice realistic pictures). Now here is my problem: I do not know the "old luminance unit" (dictionary) nit. How does it convert to lm/m*m-sr ? I am also confused with the options "s" and "m": Is the value of s a multiplier of 1000 or replaces it 1000? Should it be set to the highest expected value of , well, luminance, illuminance radiance or irradiance? If I change the multiplier for a unit conversion, I assume that s is not affected. Correct? Finally, to make me get both a quick correct result and an example to follow the conversions, could you tell me how to produce a falsecolor picture with values of irradiance in W/m*m? I really appreciate your efforts. The work I am currently doing will be part of a presentation at an IEA SHCP Task 16 meeting in Madrid at the beginning of February. Depending on the outcome of the discussion, Radiance might become a major tool for the group. So much as an incentive for you ... Sincerely, Fred Sick Date: Tue, 21 Jan 92 08:51:46 PST From: greg (Gregory J. Ward) To: sick@ise.fhg.de Subject: Re: falsecolor units Hello Friedrich, The "scale" value of falsecolor as set by the -s option determines only the maximum charted value in the image. This is as you supposed unaffected by the -m "multiplier" option which determines the units being charted. The luminance unit of 1 nit is in fact 1 lumen/steradian/m^2, so it is already in SI units. A multiplier of -m 1 produces values in the native unit of Radiance, which is watts/steradian/m^2. The only way to correctly produce illuminance or irradiance values is by rendering the image with the -i option of rpict. Then, simply apply falsecolor as you would have to produce luminance or radiance values. If any of this is still not clear to you, please do not hesitate to ask me further questions. -Greg Date: Mon, 27 Jan 1992 18:38:06 EST From: MICHAEL DONN To: greg@hobbes.lbl.gov Subject: Daylighting models in RADIANCE 2.0 We are trying to model the "real" sky using radiance and wish to test a couple of simple buildings against our artificial sky (a mirror box) which we have evaluated against real measurements in a real building we analysed for the architects last year. Our difficulty is knowing exactly what gensky is modelling. As we are using an artificial sky, there are all the usual problems of defining what is typical overcast cloudy sky luminance, and what its distribution is. 
Some of our problem is in knowing exactly what the -av parameter does in Rpict; it is also in knowing what parameters to input to skyfunc in order to properly model the sky clearness etc; and, it is also to model the sun's luminance allowing for local conditions as precisely as we can. In summary, in order to calibrate RADIANCE, we are modelling a simple grey box with one window, and comparing the daylight factors, and the actual light levels predicted against each other, and need to know as much about the assumptions behind the RADIANCE values as we do about the other mirror box values. From greg Mon Jan 27 09:02:00 1992 Return-Path: Date: Mon, 27 Jan 92 09:01:53 PST From: greg (Gregory J. Ward) To: donnmr@matai.vuw.ac.nz Subject: Re: Daylighting models in RADIANCE 2.0 Status: RO The gensky program in Radiance uses the standard CIE distributions for clear and overcast skies. The precise formulas used are in the source file ray/src/gen/gensky.c and ray/src/gen/sun.c and the function file ray/lib/skybright.cal. This function file is where the actual distribution is calculated, using the zenith luminance and ground luminance values plus the sun direction given by gensky. Zenith brightness is calculated from the solar altitude and atmospheric turbidity using a formula developed from data gathered in San Francisco, which may not be very accurate for other places on the globe. For better control, it is preferable to measure or assume a certain zenith brightness and give it directly to gensky using the -b option. It is not too difficult to develop your own function file to specify whatever sky distribution you like. Gensky is provided only as a basic skylight approximation. -Greg Date: Mon, 16 Mar 92 18:46:08 PST From: greg (Gregory J. Ward) To: edu@leicester-poly.ac.uk Subject: Re: daylighting Hi John, Thanks for your recent letter. I will attempt to answer your questions: > a) Having used mkillum for the windows, will one or more ambient bounces for > the interior space only re-distribute light in the space (necessary if > light-shelves etc. are present) or will the mkiwindow description allow more > light from the outside to enter the space messing up the calculation? > Perhaps a note could be added to the Tutorial about this. The illum created by mkillum will block all "ambient" rays during the calculation, maintaining the correct energy balance. You can set the -ab parameter to whatever value you like, as it will only be used to compute interreflection within the space. (That is, assuming you don't have any cracks or other places where light might leak through.) I will try and add an appropriate note to the tutorial, as you suggest. > b) I would appreciate some guidance about appropriate values of av to use in a > simulation and how they are determined. I wish I had a definitive answer to this question. Unfortunately, there is no general way to set the ambient value that works for every scene. My best recommendation is to use rview in the following manner. Start the program with the -ab parameter set to 1. Run it for a short while at an appropriate viewpoint, then set ab to zero using the command "set ab 0". Immediately afterwards, try setting the av parameter to different values until the new pixels being computed seem to match fairly well to the original ones computed when the interreflection calculation was on. (If you're clever, you can figure out a ballpark starting value but I think I'd have to show you how.) 
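For what it's worth, one common back-of-the-envelope starting point (a generic lighting estimate, not necessarily the shortcut Greg has in mind) treats the room as a uniform enclosure: with average reflectance rho and average direct illuminance Edir, the interreflected illuminance is roughly Edir*rho/(1-rho), and dividing that by pi and by 179 lumens/watt gives a grey radiance to try for -av. The numbers below are made up for illustration:

:::::::::: avguess.c ::::::::::
/* A rough starting value for -av, assuming a fairly uniform enclosure
   with average reflectance rho receiving an average direct illuminance
   Edir.  A generic estimate only; the inputs are arbitrary examples. */
#include <stdio.h>
#define PI		3.14159265358979
#define WHTEFFICACY	179.	/* lumens/watt, white light */

int main(void)
{
	double rho = .5;	/* average room reflectance (assumed) */
	double Edir = 500.;	/* average direct illuminance, lux (assumed) */
	double Eind = Edir*rho/(1.-rho);	/* interreflected illuminance */
	double av = Eind/PI/WHTEFFICACY;	/* radiance in Radiance units */

	printf("try:  rview -av %.3f %.3f %.3f ...\n", av, av, av);
	return 0;
}
----------------------------------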
I suggest that you set a grey ambient value for the most natural color rendering (eg. av .5 .5 .5). Alternatively, it is possible to compute an approximate value for the -av parameter based on room reflectances and light source intensities, but this does not work very well in daylight situations. > c) The dayfact script. I have been looking at ways of speeding this up, using > a coarser grid and larger radius for filtering is a possibility (I suspect i'm > not using the most efficient a* settings either). On the practical side I > have modified it produce 10 contour levels up to a maximum of DF = 10, 20 or > 40, depending on the users expectation. Also, it's worth keeping the pic file > in the event of a poorly chosen range of levels. Since, i'd prefer lines, > and since these appear mid-way between markers, i'll try to find a way to set > the marker to match the contour level directly above. Do you want me to send > you the end product? Also, the conversion factor to Lux is 470, should it > be 179? Didn't I send you my repaired version of dayfact? I thought I had fixed the incorrect value of 470 and also made it keep the pic files around. I can send the latest for you to work on if you like. I would like to see your finished product, but it would be better if you started with the most current version! > d) Am I right in thinking that I can use the indirect calculation as long as > I don't have any reflecting surfaces exterior to the building? Or do I just > have to get rid of the ground plane. You should always be able to use the indirect calculation, it just so happens that it doesn't work very well when you have a large ground plane due to the adaptive sampling algorithm used in 2.0. (Thank you, by the way, for showing me just how bad it can be!) You are better off getting rid of the groundplane and just using a light source or illum for the window with the skyfunc distribution, which already accounts for reflection from a groundplane. If you have external reflecting surfaces, then you must either use mkillum (recommended) or an interreflection calculation. If you use an interreflection calculation with 2.0, then you had better make the ground plane small or get rid of it altogether. If you want to have other external surfaces, that should still work. (I apologize for this rather convoluted answer. I hope you managed to untangle my meaning.) -Greg From: Environmental Design Unit Date: Mon, 30 Mar 92 14:03:03 BST To: greg@hobbes.lbl.gov Subject: Daylighting Calculations Hello Greg, A hue-sat color wheel would be nice addition, especially for colour matching paint samples. In the meantime, a cut-down list of colours from rgb.txt does fine. Just for a change, i'd like to ask you some questions about daylighting calculations. a) Sky models and conversions factors. The zenith radiance is evaluated in "gensky.c" as if (cloudy) { zenithbr = 8.6*sundir[2] + .123; zenithbr *= 1000.0/SKYEFFICACY; } else { and ground radiance as if (cloudy) { groundbr = zenithbr*0.777778; printf("# Ground ambient level: %f\n", groundbr); } else { and horizontal illuminance in lux is simply ground ambient level * PI * luminous efficacy. O.K. so far, but the luminous efficacy definitions in "color.h" have me confused - /* luminous efficacies over visible spectrum */ #define MAXEFFICACY 683. /* defined maximum at 550 nm */ #define WHTEFFICACY 179. /* uniform white light */ #define D65EFFICACY 203. /* standard illuminant D65 */ #define INCEFFICACY 160. /* illuminant A (incand.) */ #define SUNEFFICACY 208. 
/* illuminant B (solar dir.) */ #define SKYEFFICACY D65EFFICACY /* skylight */ #define DAYEFFICACY D65EFFICACY /* combined sky and solar */ It looks as if the zenithbr for a cloudy sky is defined in terms of lum eff = 203 lum/W, whilst the multiplier in "dayfact" is 179 lum/W. Am I missing the point somewhere? I have tried, without much success, to locate papers/texts on sky models and luminous efficacy values for my own notes. I would appreciate some recommendations if you have them to hand. b) Dayfact output. The "dayfact" pictures showing daylight factors and lux levels for our new engineering building (floor plan 25m by 7m, 56 windows and light shelves) was not terribly satisfactory - bands (and lines) were broken and it was difficult to determine the areas they enclosed. However, gaussian filtering of the saved illuminance picture improved matters greatly. Single pass filtering with r=3 proved enough for bands - pictures with lines were even better. Do you want me to mail you results as a uuencoded tar.Z file? [size < 1Mb] I wanted to compare the effects of filtering at different radii in one picture by using "pcompos" to put four illuminance pictures in columns to make one illuminance picture file for "dayfact". This strategy would be useful for comparing the consequences of the different a options for "rtrace". To my surprise, "dayfact" determined lux values for the combined pictures approx 1/4 what they were for individual pics! So, stroke monolith, toss bone in air... four pics, 1/4 the lux values, something to do with the area or num of pics? Looked closely at "falsecolor" and "dayfact" scripts, but couldn't find where picture area was significant. c) Changes to dayfact. For my own use i've set ten bands/lines for df and lux pics and default maximum values for both. Lines do show up better than bands, especially after r=3 filtering, but the problem remains about the values appearing midway. Could you suggest a fix for this. For a linear scale, adding the smallest value to each of the values in the list would cure it, as long as we remember that the number refers to the line above. The scales would come out better also e.g 0.5, 1.0, 1.5... instead of 0.25, 0.75, 1.25... ( for -s 10 -n 10 ). I would like to use "dayfact" on multiple illuminance pictures of the same scene, if you confirm that it simply scales with the number of pics in the composite I will add a multiplier option also. Not forgetting, large radius filtering to smooth the illuminance picture. I think this is everything .... for the moment. As always, thanks for the continued support. -John Mardaljevic edu@leicp.ac.uk Date: Mon, 6 Apr 92 17:55:27 PDT From: greg (Gregory J. Ward) To: edu@leicester-poly.ac.uk Subject: Re: Daylighting Calculations Hello John, Sorry for the delay in my response. I was away at a meeting. a) Sky models and conversion factors The efficacy values listed in color.h are over the visible spectrum only, so may disagree with some other values you have seen. You are correct in pointing out the discrepancy between the value used in gensky.c vs. the value used in the dayfact script. To reproduce the original photmetric values, these two factors should be identical if a grey sky were used. Since the sky is not white, we should be using a different value to account for the relative efficiency of the sky's spectrum, coupled with an appropriate RGB multiplier on the source in the scene file. 
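To make the discrepancy concrete, here is a small C walk-through of the cloudy-sky chain John quotes above, showing where the two efficacy values enter; the solar altitude is an arbitrary example:

:::::::::: skycheck.c ::::::::::
/* Walk-through of the cloudy-sky chain quoted above.  The zenith
   luminance formula and the ground-ambient factor are as in gensky.c;
   the solar altitude is an arbitrary example value. */
#include <stdio.h>
#include <math.h>
#define PI		3.14159265358979
#define WHTEFFICACY	179.	/* multiplier used in dayfact */
#define D65EFFICACY	203.	/* SKYEFFICACY in color.h */

int main(void)
{
	double alt = 40.*PI/180.;	/* solar altitude (example) */
	double zenithbr, groundbr, lux179, lux203;

	zenithbr = 8.6*sin(alt) + .123;		/* zenith luminance, kcd/m^2 */
	zenithbr *= 1000.0/D65EFFICACY;		/* -> radiance, watts/sr/m^2 */
	groundbr = zenithbr*0.777778;		/* ground ambient level */

	lux179 = groundbr*PI*WHTEFFICACY;	/* dayfact's reconstruction */
	lux203 = groundbr*PI*D65EFFICACY;	/* same efficacy both directions */

	printf("horizontal illuminance with 179: %.0f lux\n", lux179);
	printf("horizontal illuminance with 203: %.0f lux\n", lux203);
	printf("ratio: %.3f\n", lux179/lux203);	/* = 179/203 */
	return 0;
}
----------------------------------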
Unfortunately, I did not have a good spectral curve for the sky, so I used the CIE standard for the combined sun and sky, D65. Now that you raised the topic for reexamination, though, I see that the choice of D65EFFICACY was not a good one because the efficacy of the sky should be lower than white light due to its bluish tint, not higher. The correct sky efficacy should be something like 162, which when combined with an RGB color of .8 .9 1.3 would yeild approximately the correct result. Since I don't know what it really should be, you can adjust your RGB color to 1 1.126 1.626 and you should get the right luminances, anyway. I should change this stuff. I only wish I had some decent references myself. b) Dayfact output I do not know where the factor of 1/4 could be coming from. Please send me a more detailed description of this problem, using "getinfo" to send me the headers of the resulting renegade pictures. c) Changes to dayfact I have modified the script px/falsecolor.csh to put the contour lines on the values rather than between. I should have done it this way to begin with. You only need to change the following line from: boundary(a,b) : neq(floor(ndivs*a),floor(ndivs*b)); to: boundary(a,b) : neq(floor(ndivs*a+.5),floor(ndivs*b+.5)); Then copy the file to falsecolor in your executable directory. -Greg ================================================================= LUMINOUS_EFFICACY Date: Wed, 29 Jan 92 07:42:35 EST From: spencer@cgrg.ohio-state.edu (Stephen N. Spencer) To: gjward@lbl.gov Subject: Differences in Radiance v2.0? Greg, we've been testing Radiance v2.0 (the scanline-fixed version) and have come up with some differences in how images look. In specific, light sources seem to be brighter in version 2.0 than in version 1.4. I can upload a pair of images, both calculated with the same script, one under 1.4 and one under 2.0, if you'd like to look at them, to get a better idea of what I mean. I honestly don't know if it is an enhancement (read: the 1.4 method wasn't correct) or if it is a bug (read: the 2.0 method isn't correct) so I'd like to understand why these images are different. Stephen N. Spencer | Ride Bike! ,__o ACCAD - The Ohio State University | _-\_<, 1224 Kinnear Road Columbus OH 43212 | Indigo Girls Mailing List: (*)/'(*) spencer@cgrg.ohio-state.edu | indigo-girls-request@cgrg.ohio-state.edu "Usenet is like Tetris for people who still remember how to read." Date: Wed, 29 Jan 92 09:11:40 PST From: greg (Gregory J. Ward) To: spencer@cgrg.ohio-state.edu Subject: Re: Differences in Radiance v2.0? Hi Steve, Before you upload your images and scripts, let me suggest a possible cause and you can decide if this is what's happening. Between version 1.4 and 2.0, I corrected the value for the luminous efficacy of white light from 470 lumens/watt down to 179 lumens/watt. This value is used at two points in a normal calculation. First, to convert light fixture lumens to watts in ies2rad or a manual conversion process. Second, to convert watts back to lumens in ximage or similar programs that display luminance values. Since the conversion value is used in a reciprical fashion in these two steps, the overall effect is null as long as the same value is used each time. However, if you are judging by picture "brightness", ie. exposure value and screen brightness, using the same initial IES file as version 1.4 of Radiance, you will see a difference because version 2.0 uses a smaller value for luminous efficacy and thus produces brighter images. The new version is more correct. 
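A quick check of that reciprocal cancellation, using an arbitrary 1000-lumen fixture:

:::::::::: efficacy.c ::::::::::
/* The photometric round trip is unchanged by the choice of efficacy,
   but the radiometric values carried in the picture (and hence its
   apparent brightness) are larger with 179 than with 470.  The
   1000-lumen fixture is an arbitrary example. */
#include <stdio.h>

void roundtrip(double lumens, double efficacy)
{
	double watts = lumens/efficacy;		/* ies2rad direction */
	double back = watts*efficacy;		/* ximage direction */
	printf("%3.0f lm/W:  %.3f watts in the scene, %.0f lumens displayed\n",
			efficacy, watts, back);
}

int main(void)
{
	roundtrip(1000., 470.);		/* Radiance 1.4 value */
	roundtrip(1000., 179.);		/* Radiance 2.0 value */
	return 0;
}
----------------------------------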
You can read about this change a bit more in the ReleaseNotes file in the ray/doc/notes subdirectory of the distribution. -Greg

=================================================================== RPICT_PARAMETERS

Date: Thu, 30 Jan 92 11:41:57 PST From: greg (Gregory J. Ward) FAXnumber: 01149866933541 FAXfrom: "Greg Ward, Lighting Research, LBL, (510) 486-4757, fax 4089" FAXto: "Martin Mock, ABT ASIV43/LIZ"

Hi Martin, Here goes with my best shot at explaining the rpict parameters. The "min" value gives the fastest, crudest rendering. It is not necessarily the smallest value numerically. The "fast" value gives a reasonably fast rendering. The "accur" value gives a reasonably accurate rendering. The "max" value gives the ultimate in accuracy.

Param	Description		Min	Fast	Accur	Max	Notes
=====	====================	=====	=====	=====	=====	=====
-sp	pixel sampling rate	16	8	4	1
-st	sampling threshold	1	.15	.05	0
-sj	anti-aliasing jitter	0	.6	.9	1	A
-dj	source jitter		0	0	.7	1	B
-ds	source substructuring	0	.5	.15	.02
-dt	direct thresholding	1	.5	.05	0
-dc	direct certainty	0	.25	.5	1

NOTES: A) This option does not affect the rendering time B) This option adversely affects image sampling (ie. use -sp 1)

In the next version of Radiance (2.1), the options -sp, -sj and -st will be renamed -ps, -pj and -pt, respectively. This is to avoid conflict with some new options that will be added for sampling semi-specular reflections. In general, jittering is a way to reduce image artifacts by introducing Monte Carlo sampling into the rendering process. This technique was introduced by Rob Cook in his landmark paper on "Distributed Ray Tracing" in the 1984 Siggraph proceedings. If you want more information on the sampling techniques used in the direct lighting calculation, you should read the paper I wrote for the 1991 Eurographics Rendering Workshop. I have sent you a copy. Hope this helps explain things a little. -Greg

======================================================================== GENSURF

To: greg@hobbes.lbl.gov Date: Fri, 31 Jan 92 8:38:23 CST From: scoggins@mc1.wes.army.mil

Hi Greg: I would be interested in trying out your new gensurf. Jerry Ballard and I are working on rendering outside scenes using measured terrain elevation data. We have been using Radiance and some code Jerry wrote to create triangular and rectangular polygons. However, the code is not as flexible as your new gensurf, I'm sure. While you're on the line, I would like to ask you a couple of questions. I am using Radiance to make images of painted surfaces; panels, cylinders, etc. These are included in backgrounds of terrain and trees and used to get an idea about how well camouflage paints work. One of our primary interests is in what are termed gloss levels of the paints. I think this is the degree of specularity. Anyway, the people who make the measurements record the 'glossiness' as a number from 1 to 1000 using a reflectometer with fixed angles. They use black glass as a standard, which is defined as a gloss level of 1000. Do you have any thoughts on how to relate specularity and roughness to gloss level, or any literature references that might be helpful? I have been doing the work using extremes of specularity and roughness to simulate paints at different ends of the gloss level spectrum. Another application of Radiance: I'm also working to try and simulate surface temperatures of outdoor stuff, landscapes, trees, etc., using models for conduction and surface energy flux.
One of the biggest components of the surface energy flux is solar radiation, both direct and ambient. I would like to do this for 3-D surface descriptions of the objects. I plan to use Radiance to calculate the irradiance at sample points on the surface of the ground and trees and other things. I have made a small modification to rtrace so that I can call it from a C program with location and surface orientation as arguments and total irradiance as returned value. I guess this is not so much of a question as a statement, but I thought you might have some comments on this. Thanks for the note on gensurf. Jerry may have already sent you a request and if so I can just get a copy from him. Your software has really been a boost to our work around here and you sure can't beat the price. Look forward to your new releases. Bye for now. One last thing, in the off-the-wall category, do you know of any ftp sources for L-systems software ? Randy Scoggins US Army Engineer Waterways Experiment Station scoggins@mc1.wes.army.mil Date: Fri, 31 Jan 92 09:47:17 PST From: greg (Gregory J. Ward) To: scoggins@mc1.wes.army.mil Subject: gensurf and misc. Hi Randy, I sent Jerrell a copy of the new gensurf program. I actually thought of you folks right away as possible testers, but wasn't sure if you had the time. I have only one reference on gloss and how it is measured, and I'm not sure how to relate it to any physical surface parameters. The measurement technique is strictly relative, and doesn't really correlate to anything but itself. The fact that it combines overall specular reflectance with polish is a real problem. In effect, you have a single measurement where two are required at the very least. If you can make some assumptions about the index of refraction of the material involved, then you may be able to back this out since specular reflectance can be calculated from Fresnel's law. What you will discover is that non-metals never have a specularity greater than .05 or so. Calling rtrace from a program is quite useful, and I do it in several of my programs. You can check out the module ray/src/util/glareval.c and also ray/src/gen/mkillum2.c. I use the communication routines in ray/src/common/process.c to connect to rtrace via dual pipes, so I don't have to have a separate compiled version of the program. As for L-systems, I only know of the Mac program made available by Paul Bourke of New Zealand. You can pick it up from the pub/mac directory on hobbes.lbl.gov (128.3.12.38) via anonymous ftp. I don't know about source code, but you might try contacting Paul directly. His e-mail is pdbourke@ccu1.aukuni.ac.nz. Let me know if you have any success with gensurf! -Greg Date: Tue, 4 Feb 92 13:59:49 PST From: Chris Toshok To: greg@hobbes.lbl.gov Hi Greg. I am working on tracing Newell's teapot using radiance (it would make a very interesting object) but am have trouble understanding how to implement bezier patches using gensurf. I have all the controls points for the patches, and they are cubic, which is what gensurf uses (i hope), but I can't figure out what the five values gensurf wants for each bezier curve. I have mapped out all the control points onto 4x4 grids which I was going to use, but not all the coordinates have the same x,y, or z values. How can I generate a patch with gensurf when only 5 values can be given for each function x,y and z. Is gensurf capable of producing representations of three dimensional cubic bezier patches? 
If not, I'll have to write one, and although I am up to the task, I would much rather use gensurf. Help.... Chris Date: Tue, 4 Feb 92 20:40:03 PST From: greg (Gregory J. Ward) To: toshok901@snake.cs.uidaho.edu Subject: The Teapot Hi Chris, The bezier function defined by gensurf is merely the 1-dimensional Bezier polynomial. It is up to you, the user, to make it into a 2-dimensional patch and give it the control points. This is not too difficult to do, provided that you know something about the language that gensurf (and many other programs in Radiance) use. Unfortunately, I haven't documented the language very well, so here are some pointers. Start with the following file to define a 3-dimensional bicubic Bezier surface in terms of the 1-dimensional Bezier polynomial: :::::::::: bezier.cal :::::::::: { Bicubic Bezier Patch 02Mar90 Define Px(i,j), Py(i,j), Pz(i,j) } x(s,t) = bezier(P2x(s,1), P2x(s,2), P2x(s,3), P2x(s,4), t); y(s,t) = bezier(P2y(s,1), P2y(s,2), P2y(s,3), P2y(s,4), t); z(s,t) = bezier(P2z(s,1), P2z(s,2), P2z(s,3), P2z(s,4), t); P2x(s,j) = bezier(Px(1,j), Px(2,j), Px(3,j), Px(4,j), s); P2y(s,j) = bezier(Py(1,j), Py(2,j), Py(3,j), Py(4,j), s); P2z(s,j) = bezier(Pz(1,j), Pz(2,j), Pz(3,j), Pz(4,j), s); { I have commented out the definition of the Bezier polynomial below, since it is defined internally by gensurf and executes a little faster there. } { bezier(p1, p2, p3, p4, t) = p1 * (1+t*(-3+t*(3-t))) + p2 * 3*t*(1+t*(-2+t)) + p3 * 3*t*t*(1-t) + p4 * t*t*t ; } _EOF_ Then, you must create a separate file for each bicubic patch on the teapot, using the following format: :::::::::: patchN.cal :::::::::: { Bicubic Bezier patch number N } Px(i,j) = select((i-1)*4+j, { first index major, second minor } 3.51, 89.218, 15.38, 17.38, 5.81, 83.11, 19.635, 14.91, 6.38, 75.83, 25.183, 18.18, 7.91, 70.31, 22.83, 19.83 ); Py(i,j) = select((i-1)*4+j, { 16 more Bezier points } ); Py(i,j) = select((i-1)*4+j, { and another 16 } ); _EOF_ You'll notice that it is necessary to manipulate the data in order to get the points in a form that can be easily digested by gensurf. A short C program should do the trick. Once you have all your patch files together, you can create the actual Radiance input file for the teapot, which will look something like this: :::::::::: teapot.rad :::::::::: # # The (in)famous Utah Teapot # void metal copper 0 0 5 .8 .5 .02 .9 0 !gensurf copper patch1 'x(s,t)' 'y(s,t)' 'z(s,t)' 8 8 -s \ -f bezier.cal -f patch1.cal !gensurf copper patch2 'x(s,t)' 'y(s,t)' 'z(s,t)' 8 8 -s \ -f bezier.cal -f patch2.cal !gensurf copper patch3 'x(s,t)' 'y(s,t)' 'z(s,t)' 8 8 -s \ -f bezier.cal -f patch3.cal # and so on... _EOF_ The -s option to gensurf will create smoother-looking patches by using Phong surface normal interpolation. You may also want to create this file using a C program. Let me know if I can be of any further assistance. -Greg ================================================================== ALIASING From: Nick (Nikolaos) C. Fotis Subject: Various To: gjward@lbl.gov (Greg Ward) Date: Mon, 10 Feb 92 3:56:26 EET Dear Mr. Ward, as I finished the last exam for the semester, I anxiously proceeded to our beloved (except that awful keyboard!) Snake to test your code. I tried the H-P oriented malloc(). It seems to work fine! Don't know about speed, though. It's faster than the old way? I hope to test the same malloc.c with the DECstations we have here. (And the new gensurf has compiled ok, but I still don't know how to use it! ie. How I supply the height field data to this module??) a. 
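Picking up Greg's remark above that a short C program can do the control-point reshuffling for the teapot, here is a minimal sketch that reads the 16 points of one patch and writes the three select() tables in the patchN.cal layout shown; the filename and input format are illustrative assumptions:

:::::::::: mkpatch.c ::::::::::
/* Reads the 16 control points of one bicubic patch from stdin
   (x y z per line, first index major) and writes Px/Py/Pz select()
   tables in the patchN.cal layout shown above.  A sketch only --
   redirect the output into your patchN.cal file. */
#include <stdio.h>

int main(void)
{
	double p[16][3];
	int i, c;

	for (i = 0; i < 16; i++)
		if (scanf("%lf %lf %lf", &p[i][0], &p[i][1], &p[i][2]) != 3) {
			fprintf(stderr, "expected 16 control points (x y z)\n");
			return 1;
		}
	for (c = 0; c < 3; c++) {
		printf("P%c(i,j) = select((i-1)*4+j,\n", "xyz"[c]);
		for (i = 0; i < 16; i++)
			printf("\t%g%s\n", p[i][c], i < 15 ? "," : "");
		printf(");\n");
	}
	return 0;
}
----------------------------------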
I had a small problem: nfotis@kentayros 10:43pm 114 /usr/tmp/nfotis/ray/obj/office > make oconv model.b90 desk misc > modelb.oct xform: (cornerdesk.norm): bad brightfunc "ygrain" oconv: fatal - (!xform -e -s 4 -ry 90 -t -28 15.5 28 cornerdesk.norm): bad arguments for brightfunc "ygrain" *** Error code 1 ---- I had to change the relative line in cornerdesk.norm, from 2 ygrain woodpat.cal -s .05 to 4 ygrain woodpat.cal -s .05 --- and everything seemed to work OK here (I suppose) (your wood texture seems better than the previous - I may be wrong of course!) b. I feel uneasy about the texture example. In particular, the text is not very clean, and I feel that it has to do with the way your code samples intensities across surfaces (I don't really know, since I just use the system). Also the orange ball seems rather strange. Perhaps I should send you a UUencoded image?? c. About oversampling and then postfiltering the results: The idea is rather sound, but some hard spots remain, like venetian blinds. Perhaps the ray-tracer could get a jittered sampling option? (These blinds tend to show some VERY annoying staircase-like patterns, even if I use pfilt with -r 0.7 and slash by 3 the resolution of the original image :-( ) Another trouble spot seems to be the text rendering in general. Maybe I'll try to transfer to PAL video tape the filtered images (or at least to see them on a 24-bit device!) d. I'm constructing a Frequently Asked Questions message for the comp.graphics USENET newsgroup, and I would like to include a 1-2 paragraph description of the system. Here's the present description: RADIANCE 2.0: ------------ In a short sentence, It's a ray-tracer with radiosity effects. I'm using it on a HP 9000/720, and it's different from the rest (If you've seen radiosity-generated images, you know what I mean) Clearly, this doesn't do justice to your program! ;-) e. I would like for a copy of your theoretic works (from the tail of the ray.1 manual): Ward, G., ``Adaptive Shadow Testing for Ray Tracing,'' Second Annual Eurographics Workshop on Rendering, to be pub- lished by Springer-Verlag, May 1991. Ward, G., ``Visualization,'' Lighting Design and Applica- tion, Vol. 20, No. 6, June 1990. Ward, G., F. Rubinstein, R. Clear, ``A Ray Tracing Solution for Diffuse Interreflection,'' Computer Graphics, Vol. 22, No. 4, August 1988. Ward, G., F. Rubinstein, ``A New Technique for Computer Simulation of Illuminated Spaces,'' Journal of the Illuminating Engineering Society, Vol. 17, No. 1, Winter 1988. I would be grateful if you could send me a copy. Greetings, Nick. -- Nikolaos Fotis National Technical Univ. of Athens, Greece 16 Esperidon St., UUCP: mcsun!ariadne!theseas!nfotis Halandri, GR - 152 32 or InterNet : nfotis@theseas.ntua.gr Athens, GREECE FAX: (+30 1) 77 84 578 Date: Mon, 10 Feb 92 09:43:08 PST From: greg (Gregory J. Ward) To: nfotis@theseas.ntua.gr Subject: Re: Various Hi Nick, The new version of malloc should be as fast or faster than anyone else's implementation, with more efficient memory use for my programs. Try following the example at the end of the new gensurf man page to generate a height field. If it is still confusing to you, I would be happy to answer specific questions. a. Yes, this problem was the result of a careless change I made when going over to the new woodgrain. You fixed it correctly, and the fix has been included in the pub/patch directory on hobbes. b. The orange ball in the texture example is meant to appear as an orange. 
There is a 1 Mbyte image of what the finished scene should look like called pub/pics/textures.pic. The orange is an orange, so it has texture and maybe that's what looks strange to you. The text needs to be rendered at a high resolution to come out right, and you may have to set the pixel sampling rate (-sp) to 1. c. The default pixel jittering (-sj) is .67. You may increase it if you like as high as 1 (full pixel jittering). A setting of -sj 0 would mean no pixel jittering. As for the staircases you see on venetian blinds, this may be a result of the floating point color images more than anything else. Most software clips the high end of an image as its written out, before anti-aliasing is performed. Because Radiance endeavors to represent the real values involved, where there is extreme contrast the anti-aliasing is less effective. Imagine you have the following boundary in a 3x3 pixel section of your image, representing pixels brightnesses (rather than colors) as floating point values: .361 .365 .380 .353 .358 1082. .345 1085. 1090. In a standard approach, these values would be clipped before the filtering takes place, ie: .361 .365 .380 .353 .358 1.00 .345 1.00 1.00 The pixels would then be averaged together (assuming 3x3 box filtering) to a single pixel value, namely .574. In Radiance, however, no such clipping takes place, and the correct average of 362 is computed. To display this value, we must necessarily clip, but at least we clip to one instead of the erroneous value of .574. Also, the resulting filtered image can be scaled in brightness and the result will still be correct -- not true in the clipped-then-averaged case. However, there is a drawback to using correct math in our calculations. Look at the above pattern of pixels. What if the pixel sample in the upper right had landed on the brighter surface rather than the darker surface? This is quite possible when using jittered sampling. Our value may then have been around 1000 rather than .380, and our correct average would jump from being 362 to 473. Imagine another case where just one pixel in our 3x3 grid is that bright -- this amounts to a huge source of uncertainty in the final value. With the incorrect clip-then-average scheme, such large pixel values are never a problem because the result is always clipped to a value less than one. In short, if a smooth image is more important to you than a correct one, you can take the original high-resolution image out of rpict, convert it to some 24-bit image type (like TIFF or TARGA), and read it into another program such as Adobe's Photoshop to perform the anti-aliasing on the clipped image. If you don't have Photoshop, then I can show you how to do it with pcomb, but it's much slower. As for text rendering, the problem is probably that you need to increase the pixel sampling rate as mentioned before in order to correctly resolve the text. Set -sp to 1 and see if that doesn't solve your problem. By the way, text looks better without pixel jittering (-sj 0)! d. Yes, that description does seem a bit terse. Please send me your description before sending it out, and thanks! e. The papers are on the way. Regarding the animation, it is only camera motion and I picked the keyframes with rview. -Greg From: Nick (Nikolaos) C. Fotis Subject: Re: Various To: greg@hobbes.lbl.gov (Gregory J. Ward) Date: Wed, 12 Feb 92 5:18:01 EET > Hi Nick, > > Try following the example at the end of the new gensurf man page to > generate a height field. 
If it is still confusing to you, I would be > happy to answer specific questions. (blush) I had deleted the previous gensurf you sent after the announcement of the erroneous part you sent(I had not the time to try it). Did you include a manual for gensurf in you first post?? Please send it to me again... > b. The orange ball in the texture example is meant to appear as an orange. > There is a 1 Mbyte image of what the finished scene should look like called > pub/pics/textures.pic. The orange is an orange, so it has texture and maybe > that's what looks strange to you. The text needs to be rendered at a high > resolution to come out right, and you may have to set the pixel sampling > rate (-sp) to 1. It would be nice to have compressed these files, because we're on the Internet via a Trailbazer+ modem at 9600 (or 19200?) bps. It could make my life much easier.... In any case, when I tried to display the image, I got an ximage: fseek error Do your picture files are byte order indepedent? I fear the answer is no :-( In any case, I re-rendered the scene with the parameters that you had inside the image's header (the new rpict didn't like the -dp .9 parameter, so I just left it out..) The high-resolution pic was great, so I got assured that there was not some flaw to the system (It would't be good for you, no?? ;-) ) The trouble spot (or artifact, if you prefer) was the extremely high contrast of the base surface (white?) with the orange surface that gave to the orange ball a rather odd appearance. Perhaps you should use something like Sun's XDR?? I heard that many Unix boxes include these libraries, but I would prefer something like the free BRL-CAD's library for network file order and reading/writing these data. You could include it on the Radiance distribution, making it more immune from changes to its environment. > c. The default pixel jittering (-sj) is .67. You may increase it if you > like as high as 1 (full pixel jittering). A setting of -sj 0 would mean Ahh, I should have RTFMed before I ask such a question!! > no pixel jittering. As for the staircases you see on venetian blinds, > this may be a result of the floating point color images more than anything > else. Most software clips the high end of an image as its written out, > before anti-aliasing is performed. Because Radiance endeavors to represent > the real values involved, where there is extreme contrast the anti-aliasing > is less effective. Imagine you have the following boundary in a 3x3 pixel > section of your image, representing pixels brightnesses (rather than colors) > as floating point values: Hmmmm... I know that the human eye is not linearly sensitive to the various brightness levels you present to it. You could do (perhaps) a logarithmic hack to scale the values to the range 0..255?? But given the VERY wide range possible with FP numbers, I feel it's at best a challenge... [etc] > However, there is a drawback to using correct math in our calculations. > Look at the above pattern of pixels. What if the pixel sample in the upper > right had landed on the brighter surface rather than the darker surface? > This is quite possible when using jittered sampling. Our value may then > have been around 1000 rather than .380, and our correct average would > jump from being 362 to 473. Imagine another case where just one pixel > in our 3x3 grid is that bright -- this amounts to a huge source of > uncertainty in the final value. 
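Greg's 3x3 example from the previous message is easy to verify; this small C sketch reproduces the .574, 362 and 473 figures:

:::::::::: clipavg.c ::::::::::
/* Reproduces the 3x3 example from Greg's previous message: averaging
   the true floating-point values gives ~362 (or ~473 if one more
   sample lands on the bright surface), while clipping to 1 before
   averaging gives the misleading .574. */
#include <stdio.h>

double avg9(double p[9])
{
	double sum = 0.;
	int i;
	for (i = 0; i < 9; i++)
		sum += p[i];
	return sum/9.;
}

int main(void)
{
	double pix[9] = {.361, .365, .380, .353, .358, 1082., .345, 1085., 1090.};
	double clipped[9];
	int i;

	for (i = 0; i < 9; i++)
		clipped[i] = pix[i] > 1. ? 1. : pix[i];

	printf("clip then average:  %.3f\n", avg9(clipped));
	printf("average (correct):  %.0f\n", avg9(pix));

	pix[2] = 1000.;		/* upper-right sample hits the bright surface */
	printf("one more bright sample:  %.0f\n", avg9(pix));
	return 0;
}
----------------------------------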
With the incorrect clip-then-average And I thought that a 3-fold increase in dimensions plus gaussian filtering would take good care of all these problems... Going to 4-fold sizes is a VERY expensive proposition (and again, it may be not enough) > scheme, such large pixel values are never a problem because the result > is always clipped to a value less than one. > > In short, if a smooth image is more important to you than a correct one, At this time, I'm interested in artistically "correct" images. When I take the Photometry course, of course my interests will be the other way around... (Your programs made me thinking about it. Seriously!) > you can take the original high-resolution image out of rpict, convert it > to some 24-bit image type (like TIFF or TARGA), and read it into another > program such as Adobe's Photoshop to perform the anti-aliasing on the > clipped image. If you don't have Photoshop, then I can show you how to > do it with pcomb, but it's much slower. Since we don't have Macintoshes lying around here, I'm forced to the UNIX route (not that I feel bad about it! ;-) ). And the file transfer to a Mac is not that fast (ie.sneakernet). What's your recipe about pcomb?? > As for text rendering, the problem is probably that you need to increase > the pixel sampling rate as mentioned before in order to correctly resolve > the text. Set -sp to 1 and see if that doesn't solve your problem. By > the way, text looks better without pixel jittering (-sj 0)! I had done a 512x512 rendering, and it didn't seem enough. I'll try it later (when your test anim finishes! For the anim, I changed the parameters, so I hope to get PAL-sized images. I filter down to 760x578, the overscan PAL resolution - If I remember correctly the numbers. I put a horizontal field of view of 50 degrees. Does the vertical fov changes accordingly??? or I have to set this also?) > d. Yes, that description does seem a bit terse. Please send me your > description before sending it out, and thanks! See above. I think there's no problem... > > e. The papers are on the way. Many thanks!! > Regarding the animation, it is only camera motion and I picked the > keyframes with rview. OH! And you got the inbetween numbers with rpict. Correct?? (I had not seen this potential inside the manuals. Perhaps you should emphasize it in another section? In the other side, I usually play with Radiance somewhat strange hours of the day... And so I'm less than bright when I look at the manuals) Greetings, Nick. Date: Wed, 12 Feb 92 10:50:35 PST From: greg (Gregory J. Ward) To: nfotis@theseas.ntua.gr Subject: Re: Various Hi Nick, Did you remember to set binary mode before ftp'ing the image? The Radiance picture format is byte-order independent, and should read correctly when transferred between machines. I'm sorry that I didn't compress the image beforehand. All of those images should be kept compressed. I will do that now. The correct scaling of images for viewing is an open research topic. A recent paper by Tumblin and Rushmeier suggested the following brightness mapping: { Mapping of Luminance to Brightness for CRT display. Hand this file to pcomb(1) with the -f option. The picture file should have been run previously through the automatic exposure procedure of pfilt(1), and pcomb should also be given -o option. Like so: pfilt input.pic | pcomb -f tumblin.cal -o - > output.pic If you are using pcomb from Radiance 1.4, you will have run without pfilt and set the AL constant manually. 
If you are using a pcomb version before 1.4, you will have to do this plus change all the colons ':' to equals '=' and wait a lot longer for your results. Formulas adapted from Stevens by Tumblin and Rushmeier. 28 August 1991 } PI : 3.14159265358979323846; { Hmm, looks familiar... } LAMBERTS : 1e4/PI/179; { Number of watts/sr/m2 in a Lambert } DL : .027; { Maximum display luminance (Lamberts) } AL : .5/le(1)*10^.84/LAMBERTS; { Adaptation luminance (from exposure) } sq(x) : x*x; aa(v) : .4*log10(v) + 2.92; bb(v) : -.4*sq(log10(v)) + -2.584*log10(v) + 2.0208; mult = li(1)^(aa(AL)/aa(DL)-1) * ( 10^((bb(AL)-bb(DL))/aa(DL)) / DL ); ro = mult*ri(1); go = mult*gi(1); bo = mult*bi(1); -------------------------------- You can apply this directly with pcomb as shown in the example. If you want to clip the images prior to anti-aliasing reduction with pfilt, just apply a function such as 'clip(x)=if(x-1,1,x)' using pcomb, ie: % pcomb -e 'ro=clip(ri(1));go=clip(gi(1));bo=clip(bi(1))' \ -e 'clip(x)=if(x-1,1,x)' adjusted.pic \ | pfilt -x /3 -y /3 -1 -r .67 > final.pic Note that "adjusted.pic" must already be set to the desired exposure level with a previous run of pfilt. Alternatively, if you know the correct exposure scaling, you can set it with a "-s expval" option to pcomb immediately before the original picture. Regarding your changes to and problems with the animation script, perhaps you could send it to me. The vertical field of view is not altered by the horizontal setting. Rather, the image height or width is adjusted down to insure that the specified pixel aspect ratio (if non-zero) is met. If the aspect ratio (-p option) is set to zero, then you will get exactly what you ask for in terms of x and y image resolution. The inbetween frame position are actually calculated by rcalc using the Catmull-Rolm spline algorithm in spline.cal. None of this is well- documented as I have never gotten around to making a nice walkthrough animation executor. I believe Paul Bourke and the folks in New Zealand may be working on one. (pdbourke@ccu1.aukuni.ac.nz) -Greg ============================================================= SHARED_PICTURES From: Alexander Keith Barber Subject: Rice U Renderings To: greg@hobbes.lbl.gov Date: Tue, 11 Feb 92 17:20:07 CST Greg - I just uploaded 3 pictures along with a README in a tar.Z file to the xfer directory of hobbes. I hope people will pull them down and view them; I would like people to see what sort of large scale renderings are possible with Radiance. Perhaps you could tell people on the mailing list about these pictures? Despite the fact that the .rad file used for these .pics is several megs, as is the octree, the longest rendering time was one and a half hours. The shortest time was a quarter hour for a 512x512 version of the large aerial view included here. I just love the processing speed of a Unix platform. I hate to think how long these would take in Autocad... I've seen _hidden line_ drawings alone take hours... Alex Barber barber@comet.rice.edu Date: Tue, 11 Feb 92 17:38:45 PST From: greg (Gregory J. Ward) To: barber@ravl.rice.edu Subject: Re: Rice U Renderings Hi Alex, Thanks for your contributions. I would like to encourage more such contributions from others, but I'm not sure I have enough disk space to hold them. Generally, I only ask people who want to share to share the .rad files in compressed format, since these usually take up less space. In your case, though, I'm not so sure! I'm curious why you didn't use the rendering capabilities built into AES? 
I was wondering also if you and Dwayne might be willing to share your file converter with the rest of the world? I am eager to take a look at your pictures myself. Unfortunately, I'm at home now and output on a dot matrix printer just doesn't cut it. I'm glad that you have had such success with Radiance. It's true of any good ray tracing program that it will be faster with large models than any object rendering technique such as z-buffer or hidden line algorithms. You should get ahold of version 2.0, though. It really is much better than previous releases in a number of ways. Version 2.1 will even be able to render models twice as large in the same amount of memory (with some loss in geometric accuracy). -Greg From: Alexander Keith Barber Subject: Re: Rice U Renderings To: greg@hobbes.lbl.gov (Gregory J. Ward) Date: Wed, 12 Feb 92 9:48:39 CST Greg - In reply to: >I'm curious why you didn't use the rendering capabilities built into AES? >I was wondering also if you and Dwayne might be willing to share your >file converter with the rest of the world? I must say that while we have had no problems making hidden line and hidden surface pictures, using surface definitions of our own in AES, as well as a light source for a sun, there are still a few problems creating raytraced output with AES. In other words, I have no output that makes any sense. No matter what I try, I end up with a light source too bright, too dark, too focused like a spotlight... Despite following the tutorial and manual to the letter, I have nothing but a a couple meg of various small pictures that are rather fun for the mood they create (a spotlight against a building in the dark, creating purple shadows) but that have nothing to do with realism. Hence our use of Radiance to do renderings... >You should get ahold of version 2.0, though. That is what we have been using since you released it. We still have some problems with Radiance and surface defs, tho'. I'll send you a few pictures - small ones - that illustrate our difficulties. I'll render a building I've created, using just gensky for light (no ground glow since I have a large plane I use as grass or concrete or whatever). The resultant picture is very nice, but I'll get a blue color thrown on the building from the sky, along with the green of the ground... I'll send you details later this week. Alex Barber barber@comet.rice.edu Date: Wed, 12 Feb 92 10:21:42 PST From: greg (Gregory J. Ward) To: barber@ravl.rice.edu Subject: Re: Rice U Renderings Hi Alex, I finally got a look at your images. Very nice. I have shrunk the first one to 512x512 using pfilt and put it in the pub/pics directory for sharing along with your README file. I misread the header on your images and concluded (wrongly) that you were still using an earlier version. Sorry. Your sky does look a little too blue to me, and that may be why you get more color bleeding than you would like on your renderings. If you upload them, I will be able to say for sure. You may still want to create a ground source for your scene, so that the horizon does not appear black. -Greg ==================================================================== AMIGA_PORT Date: Thu, 13 Feb 92 17:12:41 +0100 From: bojsen@id.dth.dk (Per Bojsen) To: greg@hobbes.lbl.gov Subject: Amiga port of Radiance 2.0, beta version Hi Greg, I've uploaded a beta version of the Amiga port of Radiance in the /pub/ports/amiga directory. The archive is in the `lharc' format. 
`Lharc' is a popular archiving/compressing utility, which is available for the IBM PC, Amiga, and UNIX. Is it OK, that I use this format? The archive contains binaries, the library files, the example objects/models, and documentation. No source. When I release the final version of the port I'll upload a diff of the sources as I believe you requested. -- Per Bojsen The Design Automation Group Email: bojsen@ithil.id.dth.dk MoDAG Technical University of Denmark pb@id.dth.dk Date: Thu, 13 Feb 92 08:33:59 PST From: greg (Gregory J. Ward) To: bojsen@id.dth.dk Subject: Re: Amiga port of Radiance 2.0, beta version Hi Per, Thank you for uploading the port. Does this lharc utility come standard with the Amiga? If not, can you legally upload a program to unarchive the files? If not, than it might be better to use the archiving utility I wrote, which can be freely distributed. I suspect that it has close to the same utility as lharc, though I know nothing of this program. I have ported my file archiving/compression programs (pkg and unpkg) to UNIX, MS-DOS and C/PM. I do not have a version for the MacIntosh or the Amiga, though I suspect porting it to the Amiga would be a snap. I'll be happy to provide you with the source code if you need it. Thanks again! -Greg =================================================================== DECSTATION Date: Thu, 13 Feb 1992 13:15 EDT From: RCBI110@MARSHALL.MU.WVNET.EDU Subject: Re: Radiance install question To: greg@hobbes.lbl.gov Greg, Somehow I magically got all the programs, and I don't know how I did it...:^) Today's question: error message: rview: Cannot open command line window I get a nice blank black window for about 2 seconds then it stops with this error message... If it matters, it's on a brand new DECsystem 5000/200. Fresh out of the box, almost :^) Any suggestions?? Alan Date: Thu, 13 Feb 92 10:23:59 PST From: greg (Gregory J. Ward) To: RCBI110@MARSHALL.MU.WVNET.EDU Subject: Re: Radiance install question Yes, I think that other DECstation users have had similar problems, because DEC for some reason does not see fit to distributing the standard X11 fonts with its system. You must reset the COMFN macro in ray/src/rt/x11.c to a font that is supported on your system. I don't know what that would be, but there's got to be something. Also, you will have to change COMCW and COMCH while you are at it. Likewise, you should make a similar change to the FONTNAME macro in ray/src/px/x11image.c and ray/src/util/xglaresrc.c. I hope this solves your problem! -Greg Date: 20 Feb 92 15:00:00 PST From: "Jack Hsiung" Subject: Re: Problem making obj/office with Radiance 2.0 To: "greg" I followed your advice and changed the 8x13 font in src/rt/x11.c and src/px/x11image.c to fixed (works for other X windows programs on this DECstation). Now rview and ximage are able to display the images. However, it seems that images displayed by ximage have their red and blue channels switched. For example, the reddish looking wood looks blue. rview displays the colors fine. Any idea what can be causing this? Jack Date: Thu, 20 Feb 92 15:04:23 PST From: greg (Gregory J. Ward) To: cvetp035@CSUPomona.Edu Subject: Re: Problem making obj/office with Radiance 2.0 Hi Jack, Do you have a 24-bit X11 server? There doesn't seem to be much of agreement on how these beasties are supposed to work. I don't have one myself, so it is difficult for me to debug from here. If your X11 server is only 8-bit, then we're in a heap of trouble! Couldn't you find some way to get it to find the more standard font? 
-Greg Date: 20 Feb 92 15:21:00 PST From: "Jack Hsiung" Subject: Re: Problem making obj/office with Radiance 2.0 To: "greg" I know the display is 24-bit and the server is DEC's own, which I think is 24-bit (The colors in the demo animation blends very smoothly). Is it possible to go into the code and switch the red and blue when ximage reads an image? I'll try to figure out how to get the 8x13 font to work. Jack Date: Thu, 20 Feb 92 15:37:17 PST From: greg (Gregory J. Ward) To: cvetp035@CSUPomona.Edu Subject: Re: Problem making obj/office with Radiance 2.0 Go to line 733 of x11image.c. There, you can reverse the ordering of those three statements and that should turn the trick. It's a bad hack, though, since the server should be doing this job in XImage(). ============================================================== INFRARED Date: Thu, 20 Feb 92 15:58:12 +0100 From: manolesc@cethil.univ-lyon1.fr Subject: Infrared radiance To: greg@hobbes.lbl.gov Hi, Greg ! Here I am again bothering you with questions about infrared radiance. I am not sure to make a good connection of the radiance curve with the rgb va lues. So : 1 In wich file do you fix the 3 representative wavelengths rgb emploied by RADIANCE ? 2 The units of mesure for the rgb radiance values of the "light" material are Watts/rad2/m2 or Watts/sr/m2 ? If Watts/rad2/m2 how do you calculate "rgb" knowing the 3 wavelengths fixed on the 1st point ? 3 The "l" comand in XIMAGE displays the luminance value in the area of interest. In what unit of mesure is it displayed ? Thanks a lot for everything, Mircea. Mircea Manolescu INSA Lyon- Batiment 307 21, Av. Albert Einstein 69126 Villeurbanne E-mail: manolesc@cethil Date: Thu, 20 Feb 92 08:53:34 PST From: greg (Gregory J. Ward) To: manolesc@cethil.univ-lyon1.fr Subject: Re: Infrared radiance Hello Mircea, I will try to answer your questions as best I can. 1 In wich file do you fix the 3 representative wavelengths rgb emploied by RADIANCE ? As I think I said before, there are no specific wavelengths employed by Radiance for red, green and blue. Those three channels can mean whatever you want them to mean, and they are treated identically throughout the calculation. The only time any assumption is made about them is by ies2rad to compute a lamp color or ximage to compute luminance. In most applications, these primaries are chosen to correspond to a standard computer monitor, but this may be totally inappropriate in your application. 2 The units of mesure for the rgb radiance values of the "light" material are Watts/rad2/m2 or Watts/sr/m2 ? If Watts/rad2/m2 how do you calculate "rgb" knowing the 3 wavelengths fixed on the 1st point ? I am afraid I don't know what you mean by "rad2" or how this differs from steradians. The units for light sources are Watts/sr/m2/spectrum, where "spectrum" is the totality of wavelengths in which you are interested. In other words, the value for a particular channel for a light source is given as the total Watts/sr/m2 that source would have over the spectrum if it emitted uniform white light at that level. The reason for giving such values rather than the more usual Watts/sr/m2/channel is so the values are independent of the wavebands selected for the channels. A value of "1 1 1" will always mean uniform white light over the desired spectrum with a total radiance of 1 Watt/sr/m2. 3 The "l" comand in XIMAGE displays the luminance value in the area of interest. In what unit of mesure is it displayed ? 
The unit of luminance used is candelas/m2 (lumens/sr/m2), and the conversion factor from Watts/sr/m2 is 179, which is the luminous efficacy of white light over the visible spectrum (380nm to 780nm). This is the default efficacy used by Radiance, although again you may find it to be inappropriate for your needs.

I hope this helps. Please don't hesitate to ask any further questions you might have.

-Greg

===========================================================
SPECULARITY_BUG

Date: Thu, 16 Apr 92 08:12:50 EDT
From: spencer@cgrg.ohio-state.edu (Stephen Spencer)
To: greg@hobbes.lbl.gov
Subject: Can you help?

One of our users has a well-documented case of version 1.4 producing far better results when compared to version 2.0 of the RADIANCE software. I've uploaded a file, "forWard.tar.Z", into the pub/xfer area of your machine -- can you look at it for me/him? If you could, send the replies to spencer@cgrg.ohio-state.edu and to ksimon@cgrg.ohio-state.edu as he (Kevin) is the user in question. Thanks very much!

steve

Date: Thu, 16 Apr 92 12:28:59 PDT
From: greg (Gregory J. Ward)
To: ksimon@cgrg.ohio-state.edu, spencer@cgrg.ohio-state.edu
Subject: Re: Can you help?

Dear Steve and Kevin,

I have looked at your images and your files, and what you are seeing is the difference in the way Radiance 2.0 handles specular surfaces. Simply put, version 1.4 was not fully able to compute reflection from specular surfaces. Radiance 2.0 does a much better job. However, most of the surfaces in your room should be completely diffuse, yet the input file defines these materials as having a specularity between 1% and 10%. In my current model of specular surfaces, the degree of specularity increases near grazing angles. Thus, even a specularity of a few percent will increase close to 1 near grazing. That is what is causing the unusual bright areas at the base of your furniture, in the corners of the room, etc. It is also causing a "sheen" in your sofas that is quite unnatural. This gives me reason to reconsider my calculation of the specular component near grazing, to be sure.

The lesson for your work is not to specify a specular component unless you really mean it! Most fabrics and wall coverings and paints are diffuse. Enamel paint, formica, and other plastic-like finishes may have some specularity, but never more than a few percent. Only metals have a significant specular component.

Hope this helps. Thanks very much for bringing this to my attention!

-Greg

Date: Mon, 20 Apr 92 09:33:27 PDT
From: greg (Gregory J. Ward)
To: spencer@cgrg.ohio-state.edu
Subject: Re: Can you help?
Cc: ksimon@cgrg.ohio-state.edu

Dear Steve (and Kevin),

Actually, the problem you noticed in 2.0 was a rather serious bug in the normalization of the specular reflection from light sources. I am eternally grateful that you brought this to my attention, as I was just about to send in a Siggraph paper with the wrong formula!! Thanks to you, I won't have to publish a retraction and go around hanging my head low the rest of my days! I can send you the repaired files if you like, or you can wait for this and other bug fixes in version 2.1, due out soon. Thanks again!

-Greg

Date: Mon, 20 Apr 92 12:52:14 EDT
From: spencer@cgrg.ohio-state.edu (Stephen Spencer)
To: greg@hobbes.lbl.gov
Cc: ksimon@cgrg.ohio-state.edu
Subject: Can you help?

Glad to hear that we've helped. How soon is "due out soon"? I think we can probably just wait... (Kevin, I'm not trying to speak for you here.)

steve
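[Editor's note: to make the "diffuse unless you really mean it" advice concrete, here is a minimal sketch of two plastic definitions. The material names and color values are made up for illustration; the point is the last two real arguments (specularity and roughness), in the same plastic format used elsewhere in this digest:

# ordinary matte paint or fabric: leave specularity and roughness at zero
void plastic wall_paint
0
0
5 .5 .5 .5 0 0

# enamel or formica-like finish: a few percent specularity at most
void plastic countertop
0
0
5 .6 .6 .55 .03 .05

Anything much larger than a few percent in that fourth real argument is more likely a modeling error than a real non-metallic material.]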
=============================================================
VIEW_INFO

Date: Tue, 21 Apr 92 16:33:04 +0100
From: andre@borneo.inet.dkfz-heidelberg.de (Andre Schroeter)
To: gjward@Csa1.lbl.gov
Subject: radiance

Hallo, I just compiled Radiance v2.0 on ISC 2.2. All works fine except xshowtrace and the *glare* programs. These programs only show an error message that they can't get the view from the pic. ximage beeps if I try to get information with the 't' command. Maybe you know what's going wrong???

thanks, andre
e-mail: andre@borneo.inet.dkfz-heidelberg.de
        81239@rz.novell1.fht-mannheim.de

Date: Tue, 21 Apr 92 09:09:32 PDT
From: greg (Gregory J. Ward)
To: andre@borneo.inet.dkfz-heidelberg.de
Subject: Re: radiance

The problem must be with your picture file. Run getinfo on it and send me the output. (Xshowtrace and glare only work with rpict and pfilt output.)

-Greg

To: greg@hobbes.lbl.gov
From: "Andre Schroeter" <81239@novell1.rz.fht-mannheim.de>
Date: 28 Apr 92 07:48:45 GMT+0100
Subject: RE: problem with view in picfile

Hallo, here is the output of getinfo. This picture is the fisheye view from the example in the tutorial manpage.

fish.pic:
        /usr/andre/radiance/oconv sky.rad outside.rad mkiwindow.rad room.rad
        /usr/andre/radiance/rpict -vta -vp 1.5 .8 1 -vd 0 1 0 -vh 240 -vv 180 -av .5 .5 .5
        SOFTWARE= RADIANCE 2.0 lastmod Thu Apr 16 21:43:22 MES 1992 by andre on andre
        FORMAT=32-bit_rle_rgbe

thanks, andre
e-mail: andre@borneo.inet.dkfz-heidelberg.de
        81239@rz.novell1.fht-mannheim.de

Date: Mon, 27 Apr 92 23:16:03 PDT
From: greg (Gregory J. Ward)
To: 81239@novell1.rz.fht-mannheim.de
Subject: RE: problem with view in picfile

Hi Andre,

The problem is that you are using an explicit path to rpict (starting with a '/'), which ximage and xshowtrace do not know how to read. This is a bug that I really ought to fix... I will attempt to do so and send you a patch a little later.

-Greg

Date: Tue, 28 Apr 92 11:17:20 PDT
From: greg (Gregory J. Ward)
To: 81239@novell1.rz.fht-mannheim.de
Subject: RE: problem with view in picfile
Status: R

I have made the changes, but it is probably better to wait for release 2.1 since I had to change several files. I will release 2.1 next month sometime.

-Greg

=======================================================
BACKGROUND_COLOR

Date: Mon, 27 Apr 92 12:17:57 -0400
From: David Jones
To: greg@hobbes.lbl.gov
Subject: background color instead of black??

Hi Greg,

I'm trying to generate some RADIANCE pics really quickly. I'd like to render an incomplete scene, and color any ray that does not hit an object as GREY instead of BLACK as rpict does by default. Is there any easy way to do this, or to massage the output of rpict?

dj

Date: Mon, 27 Apr 92 09:28:02 PDT
From: greg (Gregory J. Ward)
To: djones@lightning.McRCIM.McGill.EDU
Subject: Re: background color instead of black??

Hi Dave,

Just make a glow source with the desired color, like so:

void glow background_color
0
0
4 .2 .3 .4 0

background_color source background
0
0
4 0 0 1 360

-Greg

Date: Mon, 27 Apr 92 09:30:14 PDT
From: greg (Gregory J. Ward)
To: djones@lightning.McRCIM.McGill.EDU
Subject: Re: background color instead of black??

You can also massage the output picture using pcompos:

        pcompos -b .2 .3 .4 -t 1e-4 input.pic 0 0 > output.pic

Note that any pixel values less than 1e-4 will be replaced by the background color, so this is no good if you actually have black objects in your scene.
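[Editor's note: a third possibility, if only the unhit pixels should pick up the background color, is to test for zero directly in pcomb, using the same if() function shown earlier in this digest. This is just an untested sketch; it assumes that rays which hit nothing come back as exact black, and it reuses the grey value from the examples above:

% pcomb -e 'hit=ri(1)+gi(1)+bi(1)' \
	-e 'ro=if(hit,ri(1),.2);go=if(hit,gi(1),.3);bo=if(hit,bi(1),.4)' \
	input.pic > output.pic

Unlike the -t threshold, this only replaces pixels that are exactly zero in all three channels, at the cost of a slower run than pcompos.]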
===============================================================
UPFRONT_TRANSLATOR

Date: Fri, 1 May 92 17:42:59 NZT
From: pdbourke@ccu1.aukuni.ac.nz
Subject: For your information
To: GJWard@lbl.gov

I have just written a model translator from Alias Upfront on the Macintosh to Radiance. It seems to work really well. Layers and colours are converted into materials. If anyone is interested, then you can pass my address on; otherwise I will eventually install a copy on your site. At the moment you only get 3- or 4-point facets (an Upfront limitation), but I intend to do cleverer tests and convert appropriate things into genbox and genprism calls.

------------------------------
Paul D. Bourke                   School of Architecture
pdbourke@ccu1.aukuni.ac.nz       Auckland University
(130.216.1.5)                    Private Bag
Ph: +64 -9 373 7999 x7367        Auckland
Fax: +64 -9 373 7410             New Zealand

============================================================
SCENE_FLATTENING

Date: Mon, 4 May 92 10:01:52 NZT
From: pdbourke@ccu1.aukuni.ac.nz
Subject: Radiance extraction
To: GJWard@lbl.gov

Is there a way to extract a decomposed description of a Radiance scene, that is, a file containing just primitives such as polygons, spheres, etc. (no generators)? I have tried various methods but have not found a way that doesn't require a possibly large amount of user input.

------------------------------
Paul D. Bourke                   School of Architecture
pdbourke@ccu1.aukuni.ac.nz       Auckland University
(130.216.1.5)                    Private Bag
Ph: +64 -9 373 7999 x7367        Auckland
Fax: +64 -9 373 7410             New Zealand

Date: Sun, 3 May 92 21:17:46 PDT
From: greg (Gregory J. Ward)
To: pdbourke@ccu1.aukuni.ac.nz
Subject: Re: Radiance extraction

Hi Paul,

That's great news about the Upfront translator! I wish I had this program, so I could use the translator.

Regarding expanded Radiance descriptions, you can use xform with the -e option to take out all inline commands, like so:

	% xform -e input.rad > expanded.rad

I have considered from time to time writing a program to completely polygonize a scene description, replacing all spheres and cones and even bringing in instances and converting everything to polygons, but I have never had a real need to do it so have just left it as an idea for a rainy day.

-Greg

~s Radiance Digest, v2n3

Dear Radiance Users,

Here is the latest culling of mail messages from users with questions about Radiance. The topics are as follows:

CYLINDER SOURCES        What kind of cylinder may be a light source?
GETTING STARTED         Documentation and modeling without CAD
NUMERICAL OUTPUT        Calculating point illuminances
PCOMPOS                 Overcoming pcompos input limits
USER INTERFACE          Advice on creating a user interface
OCONV SET OVERFLOW      What causes "set overflow" error in oconv?
HEIGHT FIELDS           How to use gensurf to create height fields
COMPILATION QUESTIONS   Problems compiling on NeXT and with GNU C
ANIMATION SPLINES       Using Catmull-Rom splines for camera animation
RA_PICT                 Transferring output of ra_pict to Macintosh
LIGHT SOURCES           Problems with light sources
HARTMANN CONSTANT       What is it?
SPOTLIGHTS              Questions about spotlight type

I hope you find the information useful.

-Greg

=============================================
CYLINDER SOURCES

Date: Tue, 2 Jun 92 14:46:48 NZT
From: pdbourke@ccu1.aukuni.ac.nz
Subject: cylinders & lights
To: GJWard@lbl.gov

In section 2.1.2 of the Radiance manual, in the section on materials and the light material, you say "..., cylinders (provided they are long enough), ... can act as light sources." What exactly (or inexactly) does "long enough" mean?

------------------------------
Paul D.
Bourke School of Architecture pdbourke@ccu1.aukuni.ac.nz Auckland University Date: Tue, 2 Jun 92 12:36:38 PDT From: greg (Gregory J. Ward) To: pdbourke@ccu1.aukuni.ac.nz Subject: Re: cylinders & lights Well, it's not a fatal error if the cylinder is too short, but cylinders in general are imperfect as light sources because I send a ray to the center of the object, and if it comes in axially, then it can pass right through the source. You will get warnings if your cylindrical sources are too short. I recommend a ratio of length to diameter of 4 or more. -Greg ========================================================= GETTING STARTED Date: Wed, 3 Jun 92 04:19:06 PDT From: logan%cs@hub.ucsb.edu (Bruns) To: GJWard@lbl.gov Subject: The "Radience Reference Manual" I printed out the Mac document files in which there are many references to said manual. Where could I obtain this from? Or do I already have it and don't know it? Also, I would appreciate any advice for people who do not have access to CAD. Currently I plan to write short C rountines which would create objects in various positions based upon input parameters. (for example I might have a program which would generate a flexible desk lamp and would accept parameters to determine relative positions of its pieces.) However, I am new to this and would appreciate any advice you could give me. Sincerely yours, Logan O'Sullivan Bruns Date: Wed, 3 Jun 92 14:16:09 PDT From: greg (Gregory J. Ward) To: logan%cs@hub.ucsb.edu Subject: Re: The "Radience Reference Manual" Hi Logan, The "Radiance Reference Manual" is distributed in the form of a troff document in the ray/doc directory. It is the file "ray.1" in that directory. There is also a short tutorial, called "tutorial.1". You should at least have nroff on your system, and you can print out these files like so: % nroff -ms ray.1 | lpr % nroff -ms tutorial.1 | lpr Most sytems also have a more sophisticated text formatter, such as ditroff or psroff or troff or some such. Ask your system administrator about how to get decent troff output. I wish I could offer some helpful insights to you about modeling without CAD. I have done it for many years myself, and it's not all that easy. You can get some distance using the Radiance generators (genbox, genrev, gensurf, genprism, etc) in conjunction with the scene transformer, xform. The Mac user's manual does a pretty good job describing how this is done. Best of luck! -Greg ============================================================== NUMERICAL OUTPUT [The following is in response to some faxed questions from Xabier Gorritxategi (good thing this isn't voice-mail) about using Radiance to compute values.] Date: Fri, 5 Jun 92 09:38:43 PDT From: greg (Gregory J. Ward) FAXnumber: 01134943796944 FAXto: "Xabier Gorritxategi, IKERLAN" Status: R Dear Xabier, I am glad that you have been using Radiance so successfully in your work. You can use the program "rtrace" to get numerical output, such as luminance and illuminance values, from numerical input such as point locations. Since Radiance uses radiometric units, such as radiance and irradiance, conversion is necessary if you want to get results in photometric units. This is: photometric unit = (.3*r + .59*g + .11*b) * 179 lumens/watt The value of 179 is the luminous efficacy of uniform white light, which is the default value used by programs that take photometric units and produce Radiance source descriptions (such as ies2rad and gensky). 
The three coefficients, .3, .59 and .11 are the relative contributions of the standard red, green and blue to luminance. (There is actually a slightly more accurate conversion stored in ray/src/common/color.h, but this more common approximation is good enough for most applications.) As an example of producing illuminance values at regularly spaced points on a 4x5 workplane, you might use the following command: % cnt 3 4 | rcalc -e '$1=$1+.5;$2=$2+.5;$3=1;$4=0;$5=0;$6=1' \ | rtrace -h -I [options] octree \ | rcalc -e '$1=179*(.3*$1+.59*$2+.11*$3)' > outfile Note that rcalc was used to massage both the input and the output to rtrace. Rcalc is a standard program distributed with Radiance, and it has quite a number of mathematical functions not found in the standard UNIX program "awk", plus it is easier to use. (I think.) Note that the output file will not contain the point locations. If you want them, you can add them with a post-process like so: % cnt 3 4 | rcalc -e '$1=$1+.5;$2=$2+.5' | lam - outfile > values Lam is another program distributed with Radiance, and it merely concatenates lines from multiple files (one of which is the standard input in this example). I hope that this helps you with some of the more esoteric features of Radiance. I want to make these sorts of calculations more straightforward for designers, if I can only get the funding from Washington, D.C. -Greg ========================================================= PCOMPOS Date: Thu, 11 Jun 92 9:16:29 NZT From: pdbourke@ccu1.aukuni.ac.nz Subject: question To: GJWard@lbl.gov Before I start pawing over the code can you give me an idea of how hard it might be to increase the maximum frames for pcompos from 32 to a larger number. I am creating 64x64 renderings as tests for animation flightpaths and it would be useful to be able to pcompos larger matrices, for example 10x10. ------------------------------ Paul D. Bourke School of Architecture pdbourke@ccu1.aukuni.ac.nz Auckland University Date: Thu, 11 Jun 92 12:18:03 PDT From: greg (Gregory J. Ward) To: pdbourke@ccu1.aukuni.ac.nz Subject: Re: question There is a macro called MAXFILE near the top of pcompos.c that you can easily change to be whatever you want. You may have another problem though if you run into the system limit on the number of open files a process can have. Another way to construct large matrices of images is of course to create several small matrices and put them all together with pcompos at the end. -Greg ================================================================ USER INTERFACE Date: Wed, 10 Jun 92 19:18:49 -0400 From: Jim Callahan To: GJWard@lbl.gov Subject: Radiance as a tool for Architecture I just recently acquired the 2.0 version of Radiance and was *very* impressed with it. I am interested in using this program to render some of my designs for school and thought you might have some experience with this sort of thing. Have you or others developed any generator programs which are specific to architectural design? Texture functions? Objects? If so, where can I find them? Also, I am in the process of developing a Graphic User Interface (GUI) to the oconv, rpict, pcompos, and pfilt programs using the InterViews C++ library written by Mark Linton of Stanford (it is available via anonymous at interviews.stanford.edu). Like you, he has developed a very powerful program and is willing to share it for free. We all owe you for your generosity. 
If you are willing, I'd like to distribute my interface at your site as soon as it is up and running (and bug free I hope). I may soon be purchasing a SGI machine if I can scrape up the money and my next project will be to write a 3d interactive "front end" for designing scenes. It would allow the user to design in 3d enviroment rather than using clumsy 2d orthagonal views and allow the construction of agregate object libraries. Radiance seems to be the best "free" tool for producing the final images I have seen. Could you suggest features/options that you would like to see in something like this. I would be willing to distribute this software for no charge. I believe that the best software is written by people who need a solution to a problem NOT by money hungry programmers. These kind of tools should be shared so that we can build on each others work (but I guess you don't need to be told all this). Once again, GREAT JOB! Thanks for the program and the help. - Jim Date: Sat, 13 Jun 92 09:48:35 PDT From: greg (Gregory J. Ward) To: jmc@ugrad.ee.ufl.edu Subject: Re: Radiance as a tool for Architecture Dear Jim, Thank you for your nice letter, and for your interest in our software. I have wanted for some while to work on a user interface for Radiance, but have had neither the time nor the funds to pursue it. Of course, I would be delighted if you would work on a GUI and make it available to people. The thing that I have found to be most important in distributing software is to make sure that there is a common base. In my case, that means UNIX and K&R C (boring as that may be). I have not played with the library you mentioned, partly because I don't get around the network much so I hadn't heard of it, and partly because I don't have and cannot afford a C++ compiler. (Actually, maybe the compiler's not so expensive, but I would have to buy a new machine to run it on!) There is much work to be done before Radiance is truly useful as a design tool, I think. The main problem is that it is too general a lighting simulation, and doesn't know a thing about architecture or architectural design. Radiance has no notion of human scale or what a door is or a window or a space. It only knows geometry and materials, placing all the burden of description and interpretation on the user. This may be all right for the user with time on his hands, but it's unacceptable for the practicing designer, who needs to talk to a design tool in his or her own language. That is what a user interface is all about. I need to get to work talking to practicing architects and designers so I can start to bridge this gap between design and simulation. I may not get to the graphical part of the user interface for some time. Most of my work will entail writing programs and scripts to perform particular design evaluation tasks and take some of the burden of description and interpretation off of the user. I'm not sure I'm even the best person to work on the final graphical user interface, so I am quite open to working with someone else on that part of it. I, too, have a lot of respect for people who work long and hard on a program for little or no money, then make it available for free to others. Many would even cast such people in the role of heroes, though I might not go that far. But I'm not one of those people. I was paid well for my efforts by the Department of Energy, and have been very lucky to work on something I enjoy and get money for it besides. 
The reason Radiance is "free" is because it has already been paid for by government taxes! I agree with you that programs are best developed by those who are very familiar with the task involved. I wish I knew more about lighting design and architecture for that reason. My hope is to get people like you to do the real work of user interface design for me, but I don't expect you to do it for nothing. You should expect some sort of compensation for your efforts. Don't devalue yourself. Programmers doing interesting work rarely make much money -- there's no reason for the MicroSofts and IBMs of the world to take it all with our consent. Even if you're not a money-grubbing bastard, you've still got to eat. The only models and programs that I have are on the hobbes.lbl.gov ftp server in the pub subdirectories. Help yourself, and please do add to our little collection if you can. -Greg P.S. Hobbes address is 128.3.12.38. Also note that Radiance is now in release 2.1. Where is everyone getting these outdated copies?! ======================================================= OCONV SET OVERFLOW From: phils@Athena.MIT.EDU Date: Mon, 15 Jun 92 11:59:43 -0400 To: greg@hobbes.lbl.gov Subject: overflow problem. Hi Greg, We're trying to trace a BIG model are getting the following error: <79> oconv -w sky.rad plot.rad > plot.oct oconv: internal - set overflow in addobject (poly12914) This is the Washington D.C. monumental core (about 21,000) polygons. But this is a BIG Vax 9000 with 500 meg. RAM so memory should not be a problem. Any ideas? Thanks, Philip Date: Mon, 15 Jun 92 09:34:06 PDT From: greg (Gregory J. Ward) To: phils@Athena.MIT.EDU Subject: Re: overflow problem. Hi Phil, The cause of this error is either many overlapping surfaces (ie. coincident), which would probably mean an error in the model, or that the enclosing global cube is too large with respect to the detail in the model. Check out the manual page for oconv. You can try increasing the octree resolution with the -r option. Set it to the maximum overall dimension of your scene divided by the size of a relatively small surface, and you should be OK. -Greg ====================================================================== HEIGHT FIELDS To: greg@hobbes.lbl.gov Subject: Gensurf file in Radiance Date: Wed, 17 Jun 92 09:45:31 EDT From: Gilbert Leung Hi, We are running your program Radiance and want to use the gensurf command. But there is little, if any then unclear, information about the coordinate files that gensurf reads. Can you send us some more details about this or at least point us to the right direction? A/some sample file(s) that gensurf reads will be great. Sincerely yours, Gilbert Leung To: greg@hobbes.lbl.gov Subject: freadscan vs freadcolr Date: Wed, 17 Jun 92 18:29:54 EDT From: Philip Thompson Hi, I'm trying to update my ra_xim code and have noticed that you now use freadcolr a lot more often. Is freadscan now obsolete? Also, we're trying to get DEMs (Digital Elevation Maps) into radiance. Do you have an example of a height field in action? or sample input to gensurf? Thanks, Philip Date: Wed, 17 Jun 92 15:52:16 PDT From: greg (Gregory J. 
Ward)
To: gleung@Athena.MIT.EDU, phils@Athena.MIT.EDU
Subject: Re: gensurf height functions

Dear Gilbert and Philip,

Here is a simple example of gensurf using height information from a file:

	gensurf dirty ground '40*s' '30*t' height.data 4 3 -s

Where height.data contains the following:

	10.35 15.78 13.33 12.15 10.72
	11.77 14.55 14.15 13.37 12.20
	12.21 14.31 14.93 13.75 13.71
	11.35 14.05 14.75 12.91 13.35

The formatting of the data file is irrelevant, but the number and ordering of points matters. Since I have specified 20 points, which is (4+1)*(3+1), gensurf will interpret these as vertex heights rather than centroid heights. If I had given anything other than 12 or 20 points, gensurf would have reported an error. The -s option performs surface normal interpolation for smoothing the data, but in any case 12 polygons would have been produced.

If I wanted to take x and y coordinates from a data file as well, I would have used a command like:

	gensurf ugly object object.data object.data object.data 4 3

The file "object.data" should contain either 12 or 20 triplets (ie. 36 or 60 floating point values) corresponding either to the centroid or vertex locations of each surface, respectively. The triplets would appear as:

	x0 y0 z0
	x1 y1 z1
	x2 y2 z2
	...

in the input file. The repetition of the file name, although awkward, was done that way to avoid more complicated argument specification, and so that I could without too much trouble allow for separate coordinate input files if someone needed that in the future.

In response to Philip's question about freadcolrs(), it is much faster than freadscan(), and when combined with setcolrgam() and colrs_gambs(), can do gamma correction as well on 24-bit pixels. You can look at ra_skel.c in the src/px directory to see how this is done.

-Greg

=============================================================
COMPILATION QUESTIONS

From: desilva@ced.berkeley.edu
Subject: compile
To: greg@hobbes.lbl.gov
Date: Thu, 18 Jun 92 21:12:52 PDT

Hello again,

I'm trying to compile the new version on Irix 4.0 and I can't get ximage to work. The error message I'm receiving is:

	X Error of failed request:  BadMatch (invalid parameter attributes)
	  Major opcode of failed request:  1 (X_CreateWindow)
	  Serial number of failed request:  4
	  Current serial number in output stream:  18

Rview appears to be working just fine. Do any of the X app-defaults need to be set? All of our Irises have at least 16 megs of RAM and a few have more than 24 megs. Should I answer yes or no to the "more than 24 megs?" question? We also just got one of the new SGI Crimsons with the 64-bit R4000 CPU. I was using the Crimson to compile Radiance on it. I'm still in the process of setting it up and installing Irix 4.04 on all the other machines.

Deanan

Date: Fri, 19 Jun 92 08:45:21 PDT
From: greg (Gregory J. Ward)
To: desilva@ced.berkeley.edu
Subject: Re: compile

Hi Deanan,

You should pick up the revised files in the pub/patch directory to get rview and ximage to work better with the SGIs. I would answer "yes" to the question about "more than 24 megs". The only difference between version 3.x and 4.x of IRIX is the compiler switch to get K&R C and the fact that 3.x doesn't fully support X11.

-Greg

Date: Fri, 19 Jun 92 13:37:15 +0200
From: pralong@gismo.cnet-pab.fr (Frederic PRALONG)
To: greg@hobbes.lbl.gov
Subject: Solicitation from a Radiance user on NeXTStation

Hi greg,

Do you know someone who has compiled radiance2R1 on a NeXTstation?

Thanks, FReD

Date: Fri, 19 Jun 92 08:48:24 PDT
From: greg (Gregory J.
Ward) To: pralong@gismo.cnet-pab.fr Subject: Re: Solicitation from a Radiance user on NeXTStation Hello Fred, Yes, I just recently exchange e-mail with someone struggling with Radiance 2.1 on a NeXT. There are a few bugaboos in the NeXT C compiler. Following is my response: >From greg Mon Jun 15 16:25:44 1992 Return-Path: Date: Mon, 15 Jun 92 16:25:41 PDT From: greg (Gregory J. Ward) To: linde@physics.Stanford.edu Subject: NeXT compilation Status: RO Well, what can I say? The NeXT is not a very standard compiler. I'm still amazed that it cannot manage to accept redefined library functions. This runs counter to the very spirit of the C language. Anyway, you can just add a -lrt to all the Rmakefile lines that resulted in the ld: Zstrcmp undef. errors. The only other serious errors were the -n: unknown option problems, which can be resolved by removing -n from the corresponding lines in Rmakefile. The rest is just bitching you can ignore. By the way, it seems that you do not have X11 installed on your NeXT machine. You need to read the instructions in noX11.help and follow them to avoid compiling the programs that won't work without it. Date: Wed, 1 Jul 1992 13:18:27 -0500 From: guy@guy.b30.ingr.com (Guy Streeter) To: greg@hobbes.lbl.gov (Gregory J. Ward) Subject: Re: how was textures.pic generated? Gregory J. Ward writes: > Hello Guy, > > Indeed, anything that reports a consistency error is highly suspicious. > In theory, you should never get such an error, and it indicates an internal > bug in the program. What machine was your version of Radiance compiled on? I'm running Radiance on an Intergraph InterPro 2020, which is a RISC architecture (Intergraph Clipper C300 processor: 32-bit integer, 64-bit float) SYSV R3 system. I compiled it with GNU C v2.2. Although the system libraries are compiled with a different compiler, I've never encountered a compatibility problem. rmake looks like: #!/bin/sh exec make "SPECIAL=" \ "CC=gcc -mc300 -fwritable-strings" \ "OPT=-O -s" \ "MACH= -DALIGN=double" \ ARCH= "COMPAT=bmalloc.o erf.o" \ INSTDIR=/usr/local/bin \ LIBDIR=/usr/local/lib/ray "$@" -f Rmakefile The only part of this I'm unsure of is the COMPAT definition. I don't know what those modules are supposed to do. I haven't changed any of the definitions (-DSMLFLT, etc.) so there couldn't be any internal compatibility problems there. I tried it again with the updated commandline you sent. The results were the same. Among the peculiarities of my system, one that strikes me as a possible problem is the way subroutine arguments are passed. The compiler always puts the first two integer or pointer arguments and the first two floating-point arguments into registers, and passes the rest on the stack. This means that variable argument lists can only be handled by the varargs (or stdarg) facility. Date: Wed, 1 Jul 92 13:42:10 PDT From: greg (Gregory J. Ward) To: guy@guy.b30.ingr.com Subject: Re: how was textures.pic generated? Hmmm. I'm starting to suspect the GNU C compiler now. The other fellow who was having troubles was also using GNU C. I have never managed to compile anything using GNU C, so I gave up trying. It's too nonstandard for my taste. Do you have an alternative compiler available? The argument list strangeness shouldn't trip up Radiance, though I'll have to check and make sure. The only time I ever play with the number of function arguments is when there are optional arguments at the end of the list. 
They are always declared but only accessed under some conditions, when they must be passed by the caller. This basic mechanism is described in K&R, so it is supported by all C compilers I've used. What happens when you pass fewer arguments than are declared in a GNU-compiled program? -Greg Date: Wed, 1 Jul 1992 16:09:20 -0500 From: guy@guy.b30.ingr.com (Guy Streeter) To: greg@hobbes.lbl.gov (Gregory J. Ward) Subject: Re: how was textures.pic generated? I believe I've discovered my problem. There are a number of places in the Rmakefiles where an explicit command is used to compile a source file. In those commands, 'cc' is used instead of '$(CC)', so my definition of 'CC' in rmake didn't matter. A different compiler was used for those source files. I don't know if there is something broken in that compiler, or a compatibility problem with other modules compiled with GNU C, but I changed all the Rmakefiles and rebuilt everything, making sure that all files were compiled with GNU. I'm rerunning 'texture' right now, and it's at: 2163454 rays, 18.33% done after 1.0092 hours which is better than twice as far as it got before. So, unless you hear from me again you can assume the problem is fixed. thanks, Guy Streeter Date: Wed, 1 Jul 92 14:56:28 PDT From: greg (Gregory J. Ward) To: guy@guy.b30.ingr.com Subject: Re: how was textures.pic generated? Great! I should have thought of this myself, I suppose, but I'm glad that someone did! I will go through my Rmakefile's and make sure they all reference $(CC) instead of cc, so future users don't have the same troubles. -Greg ============================================================= ANIMATION SPLINES To: greg@hobbes.lbl.gov (Gregory J. Ward) Subject: animations w/ splines Date: Tue, 30 Jun 92 15:37:02 EDT From: Philip Thompson Hi Greg, We're trying to borrow your spline animation technique from the cabin, and have figured out everything but the time parameter T(). In our first example we're just trying to fly through 4 points. My question is, if T is the time between keyframe i and i-1 why are there an equal number as there are frames? Second, if we want the speed to be fairly constant, should this time be proportional to the distance? When we set T() equal to the distance between points the animation started in the middle of the path, went to one end and doubled back. Anyway here is what we are trying in keys.cal now - but it seems awfully hit or miss. Px(i) = select(i, -6130, -4390, -2800, -2430); Py(i) = select(i, 1440, 740, 100, -40); Pz(i) = select(i, 500, 500, 500, 500); Dx(i) = select(i, 0.4, 0.4, 0.4, 0.4); Dy(i) = select(i, 0.9, 0.9, 0.9, 0.9); Dz(i) = select(i, -0.5, -0.5, -0.5, -0.5); Ux(i) = select(i, 0, 0, 0, 0); Uy(i) = select(i, 0, 0, 0, 0); Uz(i) = select(i, 1, 1, 1, 1); E(i) = select(i, 50, 50, 50, 50); T(i) = select(i, 0, 2, 2, 0); Thanks, Phil ps. I also noticed that in the cabin keys file you have "rview .... -t .35" where -t .35 seems to correspond to the T() values. How did you get these? Date: Tue, 30 Jun 92 16:34:14 PDT From: greg (Gregory J. Ward) To: phils@Athena.MIT.EDU Subject: Re: animations w/ splines Hi Phil, Yes, why indeed does T() have the same number of points as keyframes? Because, the first one doesn't count! I did it this way for convenience, I think. A zero value is appropriate there. Yes, the value for each frame should be proportional to the distance, though I would slow down a bit when making turns if you don't want to get sick. 
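[Editor's note: to put numbers on "proportional to the distance" for the keys.cal above: the three segments between the four keyframe positions have lengths of sqrt(1740^2+700^2), sqrt(1590^2+640^2) and sqrt(370^2+140^2), or roughly 1876, 1714 and 396 units (z is constant). So something like T(i) = select(i, 0, 9.4, 8.6, 2.0) -- about 200 units per second, with the leading zero Greg mentions -- should give nearly constant speed along the path; the overall scale is simply a choice of how fast to fly.]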
The viewfiles I created with the "view" command of rview, which allows you to tack whatever you like after the end of the view file name and it will be appended to the view line in the file. This is merely for the convenience of annotating multiple views (eg. keyframes) written out to a single file. The -t option is just what I decided to use for time. Within rview, when I had found a view I thought would work for the next keyframe, I typed: : view keyframes -t SECS where SECS was the decimal number of seconds I thought would work between the last view and this one. You're right, it's a bit of a hit and miss process. I usually do an extremely low resolution (both in time and image size) animation before starting the real thing just to check that I haven't done anything stupid. Good luck! -Greg ========================================================== RA_PICT Date: Wed, 8 Jul 92 18:45:14 -0400 From: Jim Callahan To: greg@hobbes.lbl.gov Subject: PIC to PICT problems... Greg- I think you mentioned that you use a Mac, so I thought you might know what I'm doing wrong. I am trying to view some images produced by Radiance on a Mac II-fx. These are the steps I took: 1> in UNIX: "ra_pict test.pic test.pict" 2> Moved the PICT file via ftp (in bin mode) to the Mac 3> Tried to view with many different viewers... It seem that the Mac thinks "test.pict" is a document and refuses to load it. Am I overlooking a step here? Do I need some program which will make the OS realize it is a PICT? Thanks for the help... Anyway, I am going to send my GUI (based on InterViews) for Radiance to a number of beta testers this weekend. Would you like to see it? It allows the user to set all command line options for OCONV, RVIEW, and RPICT with sliders and other widgets. In addition, it provides detailed control of image division and remote shelling over a network. I hope it will make using Radiance easier, especially for new users. Are you going to attend SIGGRAPH'92 later this month? It looks like a real "disney world" experience for the computer graphics addict... -Jim (jmc@ugrad.ee.ufl.edu) Date: Wed, 8 Jul 92 18:06:55 PDT From: greg (Gregory J. Ward) To: jmc@ugrad.ee.ufl.edu Subject: Re: PIC to PICT problems... Hi Jim, Yes, you need to modify the creator and type of the file before any tools will load it directly. The desk accessory called "disktop" allows you to make such changes. The type should be PICT and a good creator is 8BIM (used by Photoshop, I think). If you have Photoshop, you can always use the "Open as..." dialog box to load the file, even without setting the type beforehand. I would like to see your interface. Do you have a compiled version for a Sun or Silicon Graphics? I don't have Interviews or C++, so it would be difficult for me to do the compilation. -Greg P.S. Yes, I will be at Siggraph. I'll be staying with Paul Heckbert at the Hyatt if you want to get together. [Note that Siggraph will be in Orlando, home of Disney World, in '93...] ========================================================= LIGHT SOURCES From: chauvine@dingo.imag.fr (ROOT) Date: Mon, 6 Jul 92 10:32:52 MET DST Organization: IMAG Institute, University of Grenoble To: greg@hobbes.lbl.gov Subject: Radiance again Bonjour greg, Thank you for your fast reply from your bed, we hope that your are in better shape now. We have done all that you have said, and it's working (light was !). We knew your warning but understood it in a wrong way. 
We have a lot of things to ask you, but if you have no time, or if you are tired of us, don't hesitate to throw some of then. 1- we don't understand why when we have two light sources with the same power, the largest one in size gives more light in the scene? 2- we don't understand why there is a lost of rays when rendering a scene from 100 unit or more (we want to say that some objects appear with holes in them instead of be plain)? 3- Can you tell us how many sources of light are allowed at maximum. Remark: We have many black holes on object with plastic material, and that whatever is the size of the object. Last : We have a great interest with anisotropic reflection, and we shall be very pleased if ther is a mean to get a copy of your SIGGRAPH'92 article. Many thanks from a sunny Grenoble, laurent & jacques. -- ------------------------------------------------------------- - Chauvineau Laurent - Magistere Informatique - - - Ecole Univ. Info. - UJF - - Laboratoire ARTEMIS - - Projet Image de Synthese - - - - e_mail: chauvine@dingo.imag.fr - ------------------------------------------------------------- Date: Mon, 6 Jul 92 18:02:09 PDT From: greg (Gregory J. Ward) To: chauvine@dingo.imag.fr Subject: Re: Radiance again Hello Laurent and Jacques, I was better, but then my brother visited this weekend and now I am sick again! Oh, well. You're questions are not too difficult, so I will try to answer them. 1- we don't understand why when we have two light sources with the same power, the largest one in size gives more light in the scene? The total output of a light source is (as you have noticed) the product of the source area and the output radiance. Since radiance (the unit, not the software) is in watts/sr/m^2, you must multiply this by square area and solid angle to obtain power. Look at it this way: if you had a light filament that was as bright as the surface of the sun (as is a common quartz-halogen filament), it would have the same radiance as the sun but not the same output as has only about 10^-23 as much radiating area. Thus, radiance is related to the visible brightness of an infinitesimal fraction of the surface. 2- we don't understand why there is a lost of rays when rendering a scene from 100 unit or more (we want to say that some objects appear with holes in them instead of be plain)? What version of Radiance are you running? Version 2.0 has some problems with the -DSMLFLT option. This compile switch is used if you answer yes to a certain question in makeall about rendering very large models. You should be able to see it in the resulting rmake command placed in your Radiance bin directory. If you have version 2.0, you should remove this rmake command and run "makeall clean" followed by "makeall install", this time answer "no" to the question about large models. If you have version 2.1 and are experiencing this problem, then I would like to know about it. 3- Can you tell us how many sources of light are allowed at maximum. There is no maximum to the number of light sources, although you will find that the calculation slows down more the more light sources you add. It is best to represent your scene with as few light source surfaces as possible to obtain the fastest renderings. Last : We have a great interest with anisotropic reflection, and we shall be very pleased if ther is a mean to get a copy of your SIGGRAPH'92 article. I will send you a preprint of this article when I get back to the office. 
-Greg

From: chauvine@dingo.imag.fr (ROOT)
Date: Thu, 9 Jul 92 10:24:46 MET DST
To: greg@hobbes.lbl.gov
Subject: Radiance version

Hello greg,

We are using version 2.1 and we still have this problem. Your new manual is very useful and well made. Thank you for agreeing to send us the preprint.

laurent & jacques.

Date: Thu, 9 Jul 92 15:39:36 PDT
From: greg (Gregory J. Ward)
To: chauvine@dingo.imag.fr
Subject: Re: Radiance version

Were you using the -DSMLFLT option? I just tried it on my machine, and I ran into similar problems, so I tried decreasing the value of FTINY further. You should try modifying the FTINY value in common/fvect.h so it matches the following:

/* Copyright (c) 1988 Regents of the University of California */

/* SCCSid "@(#)fvect.h 2.3 7/9/92 LBL" */

#ifdef  SMLFLT
#define  FLOAT		float
#define  FTINY		(1e-3)
#else
#define  FLOAT		double
#define  FTINY		(1e-6)
#endif

#define  FHUGE		(1e10)

	/* and so on... */
------------

Then, try cleaning and recompiling everything and see if you still have troubles. I noticed some small problems, but there's really no getting away from errors with short floating point numbers.

-Greg

=======================================================
HARTMANN CONSTANT

From: bcurrey@neumann.une.oz.au (The Ray)
Subject: Radiance - bits and pieces
To: GJWard@lbl.gov
Date: Wed, 5 Aug 92 13:45:30 EST

Hi Greg,

I have recently got your Radiance program through FTP and have successfully compiled it to run on our DECs here at Uni. I will say right now that the program is very comprehensive, and I wish I had more time to get to know it better. Anyway, I have two questions:

- One, where can I get some values for the Hartmann constant you refer to in the manual regarding dielectrics? The Physics department here was stumped when I asked them for data (in fact they had never even heard of such a constant!)

- Two, since I have to keep this in a temp directory on this system (because of the program's size), what files do I really need for the program to run? I'm trying to cut down on disc usage as much as possible.

Any help would be greatly appreciated.

Boyd
bcurrey@neumann.une.oz.au

Date: Wed, 5 Aug 92 09:17:23 PDT
From: greg (Gregory J. Ward)
To: bcurrey@neumann.une.oz.au
Subject: Re: Radiance - bits and pieces

Hi Boyd,

The Hartmann constant can be found in the Handbook of Chemistry and Physics, or fitted manually based on a list of (wavelength, refractive_index) pairs. Basically, the final index of refraction is computed by:

	n = n0 + Hartmann_constant/wavelength

where wavelength is given in nanometers. Note that this constant ONLY has effect for light scattered from light sources, and is normally irrelevant to the calculation. That is why it is usually left as zero, ignoring the wavelength dependence of the index of refraction.

As for your second question, only the binaries, the library directory (ray/lib in the distribution) and any models you want to use need be kept around. The source code and the rest can be removed once the programs have been successfully compiled.

-Greg
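[Editor's note: for anyone wondering where this constant actually goes, it is the last of the five real arguments of the dielectric type (red, green and blue transmission, index of refraction, Hartmann constant). The material name and numbers below are illustrative guesses, not measured data: with n0 = 1.49 and a Hartmann constant of 17, the formula above gives an index of about 1.53 at 400 nm and about 1.51 at 700 nm, i.e. ordinary glass-like dispersion.

# hypothetical clear glass with a little dispersion
void dielectric crystal
0
0
5 .85 .85 .85 1.49 17

Leaving that last argument at zero, as Greg says, simply turns the wavelength dependence off.]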
==========================================================
SPOTLIGHTS

Sender: MAILER%PLWRTU11.BITNET
Date: Thu, 27 Aug 92 15:43:19 CET
From: SJK%PLWRTU11.BITNET@Csa3.lbl.gov
To: greg@hobbes.lbl.gov
Subject: Spotlights in RADIANCE; antimatter

Hello Greg,

Recently we (with Karol Myszkowski) spent some time experimenting with RADIANCE. Our goal is to compare the accuracy/speed of RADIANCE with TBT using similar scenes. Actually Karol is absent (short holidays), so I continue the investigations alone. I am a novice in RADIANCE, so I apologize for asking about probably well-known topics. I encountered some difficulties when I tried to model directed light sources by means of the spotlight type (with RADIANCE 2.1), and I have not found answers to my questions in either the manual or the digest files.

1. What does focal distance mean? I am unable to see any difference between spotlights with the same angles but different lengths of orientation vectors.

2. Visibility of spotlight. I noticed that the spotlight is visible even when the observer is located at a point not illuminated by this light (see the picture below). Is this correct?

3. Secondary spotlights. The picture shows that a secondary spotlight acts as if it were undirected. The top face of the golden cube is illuminated only partly; however, the illuminated area on the wall is produced by the whole square. Below is the source file (tc.rad):

------------------------------------
void spotlight dir_source
0
0
7 10000 10000 10000 90 0 0 -1

void plastic white
0
0
5 .7 .7 .7 0 0

void mirror polished_gold
0
0
3 .45 .25 .02

!genbox white room 10 10 10 -i
!genbox polished_gold gold_box 3 3 3.5 | xform -t 0 4 0

# Light source
dir_source ring light
0
0
8 5 3 9 0 0 -1 0 0.1
-----------------------------

I have used the command

	rpict -vf $(name).vf -x 300 -dr 1 -ab 1 -av 0.05 0.05 0.05 -t 10 \
		$(name).oct > $(name).pic

with tc.vf containing:

	rview -vtv -vp 9 9 5 -vd -1 -1 0 -vu 0 0 1 -vh 90 -vv 90 -vs 0 -vl 0
--------------------------

4. Are goniometric diagrams possible in RADIANCE? In view of the very general mechanism for specifying reflectance and transmittance (BRTDfunc), I was surprised not to find something like a ULEDfunc (Unidirectional Light Emitting Distribution) to specify a dependency of the power of a light source on the emitting direction. Is it possible to specify such a dependency in some way? If not, maybe it is worth including such a possibility in a future version?

5. Antimatter remark. The method to produce a hole with a different surface material that you suggested in v2n2, lines 251--271 (ANTIMATTER, answer to David Jones, 17 Jan 92) does not work (I tried to make a spherical hole in a cubical object instead of a cylinder). It does work (in my example) after the following change to the AMAT2 definition:

void antimatter AMAT2
2 MAT2 MAT1
0
0
# The order of MAT2 MAT1 is significant!

With best regards,
Andrei Khodulev: sjk@plwrtu11.bitnet

Date: Thu, 27 Aug 92 18:13:36 PDT
From: greg (Gregory J. Ward)
To: SJK%PLWRTU11.BITNET@Csa3.lbl.gov

Hello Andrei,

Thank you for your excellent questions. I'm afraid that I will not be able to give you a complete answer right away. Briefly,

1. The focal length produces only a subtle difference, as it shifts the effective position of the source behind the actual emitting surface. A real spotlight has the effect of changing its falloff to something less than r^-2 (where r is the distance to the emitter) in this way. The actual falloff is closer to (r+e)^-2, where e is the focal length of the spotlight's optics. It is a minor concern, I admit.

2. This is the way the spotlight material behaves. The spotlight type is primarily there to make the calculation more efficient by ignoring that portion of a light's output that is insignificant. It is usually used in conjunction with a light source pattern, described briefly in the answer to your question 4 (below).

3. Since I won't be able to run your file right away, I will have to answer this question later.

4. Yes, light source emissions are specified using the more general pattern types brightfunc or brightdata. (Colorfunc and colordata may also be used if you wish to give your light source a color-dependent distribution.) A simple example is as follows:

void brightfunc cos3dist
2 cos(Dz)^3 .
0
0

cos3dist light con_light
0
0
3 100 100 100

con_light ring con_lamp
0
0
8 5 8 3 0 0 -1 0 .06

You should look at the files in the ray/lib/source/ies directory for examples of the brightdata primitive and its application to light sources.

5. I'll have to go back and look at the question to which you refer. Yes, order does matter. The first material given is the one used at the antimatter boundary.

-Greg
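[Editor's note: a rough number for the focal-length effect in answer 1 above: with the falloff changed from r^-2 to (r+e)^-2, a focal length of e = 1 at a distance of r = 2 (same units) gives 1/9 instead of 1/4, or about 44% of the illuminance; at r = 20 it gives 1/441 instead of 1/400, about 91%. The difference shrinks rapidly with distance, which is why different orientation vector lengths are hard to see in a rendering unless the view is very close to the source.]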
(Colorfunc and colordata may also be used if you wish to give your light source a color-dependent distribution.) A simple example is as follows: void brightfunc cos3dist 2 cos(Dz)^3 . 0 0 cos3dist light con_light 0 0 3 100 100 100 con_light ring con_lamp 0 0 8 5 8 3 0 0 -1 0 .06 You should look at the files in the ray/lib/source/ies directory for examples of the brightdata primitive and its application to light sources. 5. I'll have to go back and look at the question to which you refer. Yes, order does matter. The first material given is the one used at the antimatter boundary. -Greg ~s Radiance Digest, v2n4 Dear Radiance Users, It's time for another collection of questions and answers on Radiance. If this digest is unwelcomed junk mail, please write to GJWard@lbl.gov to have your name removed from the list. Here is a list of topics for this time: VIDEO - Simulating video photography with Radiance INTERREFLECTION - Diffuse interreflection accuracy PENUMBRAS - Generating accurate penumbras HEIGHT_FIELDS - Generating colored height fields INSTANCES - Octree instancing problems CONSTANTS - Constant expressions in .cal files IMAGES - Image formats, gamma correction, contrast and colors GENERAL - Some general questions about global illumination and rendering TEXDATA - Using the texdata type for bump mapping CSG - Using antimatter type for destructive solid geometry We have been having poor communication lately with our DOE managers back in Washington, DC. Because of this, I may soon ask for your feedback on plans for transfer of Radiance to a wider user community. -Greg ========================================================= VIDEO - Simulating video photography with RADIANCE Date: Fri, 28 Aug 92 14:39:57 CDT From: pandya@graf6.jsc.nasa.gov (Abhilash Pandya) Apparently-To: GJWard@lbl.gov Greg- In our work, we are trying to generate accurate maps of lighing. Your program provides us with accurate radiance values at each pixel in an image. We would like to produce images that an eye or camera will produce. These systems have mechanisms to filter the images with iris and lens control. Do you have information on how this transformation can be done? We are able to apply scale factors to make the images look realistic, but these are guesses. By the way, your package is a very good one, in just 2 weeks we were able to trace complex space shuttle lighting very easily. Nice work. Pandya. Date: Fri, 28 Aug 92 13:24:34 PDT From: greg (Gregory J. Ward) To: pandya@graf6.jsc.nasa.gov Subject: clarification Hello Pandya, I am glad you have had some success with your shuttle lighting work. I would be very interested to see any results you are willing (and able) to share. Could you clarify your question for me a bit, please? Do you want to reproduce the automatic iris and shutter control found in cameras? Do you wish to model also depth of field? I do have some formulas that can tell you roughly how to set the exposure value to correspond to a given f-stop, ASA and shutter speed of a camera, but the automatic exposure control of cameras varies quite a bit from one make of camera to another. -Greg Date: Thu, 3 Sep 92 17:29:59 PDT From: greg (Gregory J. Ward) To: pandya@graf6.jsc.nasa.gov Subject: camera simulation > 1. We are planning to run an experiment in a lighting lab where > we measure the light distribution and material properties for > Shuttle and Station applications. 
Our overall goal is to compare > the output of a camera (with the fstop, film speed, shutter speed > and development process gamma all known) with a radiance output for > a test case. How do we process the radiance output to emulate the > camera image? We would be interested in the formulas you mentioned > and also any reference list that deals with validation of your > model. Here is the note on film speed and aperture: Francis found the appropriate equation for film exposure in the IES handbook. There isn't an exact relation, but the following formula can be used to get an approximate answer for 35mm photography: Radiance EXPOSURE = K * T * S / f^2 where: T = exposure time (in seconds) S = film speed (ISO) f = f-stop K = 2.81 (conversion factor 179*PI/200) This came from the IES Lighting Handbook, 1987 Application Volume, section 11, page 24. So, if you were trying to produce an image as it would appear shot at 1/60 sec. on ASA 100 (ISO 21) film at f-4, you would apply pfilt thusly: pfilt -1 -e `ev "2.81*1/60*21/4^2"` raw.pic > fin.pic > 2. We would like to extend the static case (#1) to a dynamic case > where we can model the automatic iris control and fstop found in > the eye and also video cameras. We have information on how the > video uses average ambient light to adjust the iris aperture > (circuit diagrams). We know how the fstop is computed dynamically > (using infared rays to detected the neareast surface). What > approach do you suggest? I assume you meant to say "focus" in the penultimate sentence above. Currently, "depth of field" simulation is not directly supported in Radiance. In effect, an infinite f-stop is always used with results in unlimited depth of field (ie. as from a perfect pinhole camera). If you wish to fully model the dynamic exposure compensation of a video camera, you will have to use different exposure values for pfilt as above, but on a per-frame basis. > 3. We need to find a scale factor to be used in the falsecolor > routine that corresponds to the actual range of illuminance in > the image. The default value may saturate the image in certain > regions. How do we find the optimal scale value in nits without > trial and error? Ah, yes. A fair question. It just so happens that until recently there was no way to determine the maximum value in an image. I have just written a program called "pextrem" that quickly computes the minimum and maximum values for a Radiance picture. This program will be included in version 2.2 when it is released this fall. I have appended it for your benefit along with an improved version of falsecolor at the end of this message. > We will be glad to share the information on the results of our study > when we are at that stage. I'd love to see it! -Greg ========================================================= INTERREFLECTION - Diffuse interreflection accuracy Date: Mon, 31 Aug 92 23:59:53 CET From: SJK%PLWRTU11.BITNET@Csa3.lbl.gov To: greg@hobbes.lbl.gov Subject: Diffuse interreflection Hello Greg, Thank you for your excellent answers to my (excellent? Hmmm) questions. I have really overlooked a possibility to specify angle dependencies in brightfunc. I have one more question. It is not urgent (as well as previous ones) so don't worry about them if you are busy with something else. Now I try to investigate diffuse interreflection calculation in RADIANCE. I began with a cubic room covered with totally diffusive white plastic (reflectivity 2/3) and a single small light source inside. 
The diffuse interreflection in this case should produce ambient light with total energy twice as large as the energy of the light source. Analysing the results, I noticed that some small error (5-10%) remains even after 10 iterations. Further investigation revealed that the same problem exists for the simplest case of a sphere with a light source at its center. So my question is (numbering continues the previous letter): 6. How can I improve the diffuse interreflection accuracy? Consider the following scene:
void light white_source 0 0 3 10000 10000 10000
void plastic white 0 0 5 .667 .667 .667 0 0
white bubble room 0 0 4 5 5 5 5
# Light source
white_source sphere central_source 0 0 4 5 5 5 0.1
I used the parameters: -vtv -vp 5 5 4 -vd 0 0 -1 -vu 0 1 0 -vh 120 -vv 120 -x 100 -ab 5 -t 30 Due to the full symmetry we can calculate the ambient light exactly, and not only the final value but even the value after any number of ambient iterations. The surface brightness (constant) after n iterations should be the following (neglecting absorption in the light source): B = r^2/R^2 * C * P * d * (1+d+d^2+...+d^n) where B is the brightness in nits; r is the radius of the light source; R is the radius of the room; C is the constant conversion factor = 179 lumens/Watt; P is the power density of the light source (Watt/m^2/sr); d is the surface reflectivity. The results for the example above are shown in the following table:

-ab n     Theory    RADIANCE
-------------------------------
  0         477        477
  1         795        797
  2        1007       1015
  4        1242       1295
  5        1305       1351
  6        1347       1362
 10        1414       1362
infty      1432      (1362?)
-------------------------------

So we can see that up to n=2 the agreement is perfect; then RADIANCE begins to overestimate the ambient light, but after six iterations saturation occurs, so the final value is underestimated. Is it possible to achieve a more accurate calculation of the ambient light? Which parameter is responsible for it? I tried varying the -ad, -aa, -lr, and -lw parameters with no effect. Andrei Khodulev, sjk@plwrtu11.bitnet Date: Mon, 31 Aug 92 22:37:37 PDT From: greg (Gregory J. Ward) To: SJK%PLWRTU11.BITNET@Csa3.lbl.gov Subject: Question #6 Hello Andrei, The reason that Radiance never converged in your example problem is that each successive interreflection uses half as many sample rays. (See the 1988 Siggraph article on the technique for an explanation.) With so many bounces, you dropped below the one-ray threshold at about the 7th bounce, which is why no further convergence was obtained. To get better convergence, you would have to decrease the value of -lw (to zero if you like), increase -lr (to 12 or whatever), and ALSO increase the value of -ad to at least 2^N, where N is the number of bounces you wish to compute. By the way, Radiance assumes that your average surface reflectance is around 50%, which is a good part of why your 67% reflectance room shows poor convergence with the default parameter values. I could have used the actual surface reflectance to guide the calculation, but that would cause problems with the reuse of the indirect irradiance values. The preferred way to get a more accurate value is to estimate the average radiance in the space and set the -av parameter accordingly. I wish there were a reliable automatic way to do this, but there really isn't one, which is why the default value is zero. In your example, the correct ambient value specification would be 1432/179, which is 8 W/sr/m^2. Of course, you would obtain convergence with this value right away.
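As a quick check on the arithmetic: the geometric series in your formula sums to 1/(1-d) times the -ab 0 value in the limit, so the limiting brightness is 477/(1-.667), or about 1432 nits, and 1432/179 is the 8 W/sr/m^2 above. To make the two routes concrete, the option settings would look roughly like this for your scene; this is only a sketch, with everything else left at its defaults:

	-ab 10 -lr 12 -lw 0 -ad 1024	(the brute-force route; 2^10 = 1024 for ten bounces)
	-av 8 8 8			(the estimated average radiance route; with this set, even -ab 0 should land on the right value)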
As for the overestimation of values for 3-6 bounces, it's conceivable that Radiance would be off by that much, but it's more likely you're just seeing the errors associated with the Radiance picture format, which at best keeps within 1% of the computed values. I tried the same experiment with rtrace (and the default parameter values) for -ab 6, and got a result of 1349 nits, which is within .1% of the correct value of 1350 nits. (Note that you should have used .667 instead of 2/3 for the surface reflectance in your calculations, since that's what you put in the input file.) I want to thank you once more for setting up such an excellent test scene. I really should be paying you for all your good work! -Greg ========================================================= PENUMBRAS - Generating accurate penumbras Date: Tue, 1 Sep 92 17:16:49 PDT From: wex@rooster.Eng.Sun.COM (Daniel Wexler) To: greg@hobbes.lbl.gov Subject: Penumbra problems Greg, We have been toying with the command line arguments to Radiance to achieve nice soft shadows. Unfortunately we have been cursed with severe aliasing. I have put an example image in the xfer account on hobbes (aliased_ball.pic). I think the problem is obvious. We use pfilt to achieve supersampling, but the aliasing will not go away until the artifacts in the original image are eliminated. Essentially, we would like the most accurate image regardless of computation time. If you know what arguments would achieve this result, that would be great. I don't think we need to use any ambient calculation for these images, but please correct me if I'm wrong. Thanks, Dan Here is the command we used to create the image: rpict -x 1000 -y 1000 -vtv -vp -5.112623 -7.815219 -3.025246 -vd 0.177627 0.917738 0.355254 -vu -0.000000 -1.000000 -0.000000 -vh 63.985638 -vv 63.985638 -ps 2 -dj 0.5 -pj 1.0 -ds 0.00001 -dc 1.0 NTtmp.oct > NTtmp.pic And here is the radiance file; note that the modeller outputs a separate file for each object, and uses xform to position them: void plastic gray_plastic 0 0 5 0.7 0.7 0.7 0.05 0.1 ############################# # PRIMITIVES: void light white_light 0 0 3 40000 40000 40000 !xform -e -m white_light big_light.obj void metal bronze_metal 0 0 5 0.9 0.3 0.0 0.0 0.0 !xform -e -m bronze_metal -rx 89.996984 test_ring.obj !xform -e -m bronze_metal -s 0.300000 -t -2.000000 0.000000 0.000000 test_planet.obj -----------big_light.obj--------------- white_light sphere big_light 0 0 4 0.0 0.0 0.0 1.0 -----------test_ring.obj--------------- bronze_metal ring test_ring 0 0 8 0.0 0.0 0.0 0.0 0.0 1.0 1.000000 10.000000 -----------test_planet.obj------------- bronze_metal sphere test_planet 0 0 4 0.0 0.0 0.0 1.0 Date: Tue, 1 Sep 92 21:03:49 PDT From: greg (Gregory J. Ward) To: wex@rooster.Eng.Sun.COM Subject: Re: Penumbra problems Hi Dan, Well, I'm not sure I can really tell which artifacts you are talking about, since I'm doing this from home and printing your picture out on a dot matrix printer. If you are referring to the patterns apparent in the penumbra and even the test_ring object, that is a result of the anticorrelated sampling strategy used by rpict. The standard jittering techniques use a psuedo-random number generator for stochastic sampling. Radiance uses a sequence of anticorrelated samples (based on the method described by Schlick in his 1991 Eurographics Rendering Workshop paper) that converges faster than a purely random sequence, but is not without artifacts. 
One can actually choose the final appearance of the artifacts, and I've chosen sort of a brushed look in rpict. To really get away from artifacts, you will have to use 3 or 4 times oversampling, eg: rpict -x 4096 -y 4096 ... octree | pfilt -1 -x /4 -y /4 -r .7 > output.pic Regarding your other arguments, you should try the following: -ps 1 -dj 0.5 -pj .9 -ds 0.1 The -ds value you used is really much higher than necessary, and has no effect with spherical light sources anyway (which is part of your problem with this particular scene). If you want to get rid of the brushed appearance, you can modify the random.h header by defining urand() to be the same as frandom(), though you will get a noisier (higher variance) result: #define urand(i) frandom() One place you will not easily eliminate spatial aliasing in Radiance is at the boundaries of light sources. Since all calculations, including image filtering, is done in floating point, very large differences in neighboring pixel values will continue to cause ugly jaggies even at large sample densities. The only way around this is to cheat by clipping prior to filtering, a step I choose to avoid since it compromises the integrity of the result. Let me know if these suggestions aren't enough. -Greg ========================================================= HEIGHT_FIELDS - Generating colored height fields Date: Thu, 3 Sep 92 17:30:24 PDT From: greg (Gregory J. Ward) To: fsb@sparc.vitro.com Subject: Re: Radiance Digest, v2n3 Dear Steve, > OK I tried this and get a brown looking surface when I give it > brown plastic modifier. It uses the same modifier for every patch. > Is there a way to make the modifier select a color according to > elevation? Like below a certain point is blue for water, and then > green, and then on up is brown, and then the highest elevations > are white? I haven't been using this package for very long so am > not really that familiar with how to do things yet. The usual way to see the height field is to insert a light source (such as the sun as output by gensky) and the lighting will show it to you naturally. If you want to do some fun stuff with colors, you can use a pattern based on the Z position of the surface point, eg: # A1 is level of water, A2 is level of snow void colorfunc ranges 4 r_red r_grn r_blu ranges.cal 0 2 1.5 3.5 ranges plastic ground_mat 0 0 5 1 1 1 0 0 ---------------------------------- ranges.cal : { Select water or ground or snow depending on altitude } { A1 is water level, A2 is snow level } { move from green to brown between A1 and A2 } lp = (Pz-A1)/(A2-A1); r_red = if(-lp, .02, if(lp-1, .75, linterp(lp,.1,.5))); r_grn = if(-lp, .2, if(lp-1, .75, linterp(lp,.5,.3))); r_blu = if(-lp, .4, if(lp-1, .75, linterp(lp,.1,.1))); -Greg ========================================================= INSTANCES - Octree instancing problems From: Environmental Design Unit Date: Thu, 17 Sep 92 16:00:11 BST To: greg@hobbes.lbl.gov Subject: Re: instancing octrees Hello Greg, I'm getting some strange behaviour from "oconv" when instancing octrees. I've made a single storey description of a building and created the (frozen) octree (~0.5Mb). A five storey octree can be made virtually instantly, whereas with 6 or more, "oconv" seems to get hung, gradually soaking up more memory. I let one run over lunch and it still didn't finish! I've tried increasing the resolution and setting a bounding box, but to no effect. Am I right in thinking that it is, in fact, something to do with the bounding-box? 
I see that version 2R2b is on pub/xfer, should I be using it? Regards, -John Date: Thu, 17 Sep 92 17:56:17 PDT From: greg (Gregory J. Ward) To: edu@de-montfort.ac.uk Subject: Re: instancing octrees Hi John, Never mind my previous response. I fooled around with the problem a bit, and realized that the real difficulty is in resolving the octree instances' boundaries. Because your stories are (presumably) wider and longer than they are high, the bounding cube determined by oconv for the original frozen octree extends quite a bit above and below the actual objects. (I suppose that oconv should start with a bounding parallelepiped rather than a cube, but there you are.) When you subsequently stack your octrees to create a building, the vertical faces of the corresponding bounding cubes are largely coincident. As you may or may not know, oconv will then resolve these coincident faces to the resolution specified with the -r option (1024 by default). This can take quite a long time. There are two possible solutions. The best one is probably to reduce the value of -r to 200 or so, provided that you don't have a lot of other detail in your encompassing scene. The other solution is to increase the value of the -n option to the number of stories of your building, or to the maximum horizontal dimension divided by the story height, whichever is smaller. Ideally, the instanced octrees should not significantly overlap. As you noticed, it's even worse when the faces of the bounding cubes are coplanar and overlapping. Hope this helps! -Greg P.S. The behavior of oconv used to be MUCH worse with regards to overlapping instances. It used to try to resolve the entire intersecting VOLUME to the maximum resolution! ========================================================= CONSTANTS - Constant expressions in .cal files Date: Thu, 24 Sep 92 11:41:57 -0400 From: David Jones To: greg@hobbes.lbl.gov Subject: Re: radiance 2.1 change with "cal" files?? In looking in your "ray.1" and trying to understand my error, I got confused about "constants". I had pondered arg(n), but since it had worked before, I dismissed it. I must admit I don't understand the concept of a "constant function". Can you elaborate? ... and does declaring something as a "constant" really translate into much of a savings? as always, thanks for your help, dj Date: Thu, 24 Sep 92 08:52:47 PDT From: greg (Gregory J. Ward) To: djones@Lightning.McRCIM.McGill.EDU Subject: Re: radiance 2.1 change with "cal" files?? Hi Dave, The savings garnered from a constant expression depends on the complexity of the expression. When expensive function calls are involved, the savings can be substantial. A constant function is simply a function whose value depends solely on its arguments. All of the standard math functions have the constant attribute, as do most of the additional builtin functions. Even the rand(x) function has the constant attribute, since it returns the same pseudorandom number for the same value of x. Functions and variables that somehow depend on values that may change due to a changing execution environment or altered definitions must not be given the constant attribute or you will get inconsistent results. This is because the expression is evaluated only once. Remember also that constant subexpressions are eliminated, so by using constant function and variable definitions, you save in any expression that refers to them. I hope this explains it a little better. 
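In case a concrete fragment helps, here is a tiny sketch of the syntax (the names here are invented): in a .cal file, a colon marks a constant definition and an equals sign marks an ordinary one.

DEGREE : 3.141592654/180;	{ a constant expression -- evaluated once and folded in }
sq(x) : x*x;			{ a constant function -- depends only on its argument }
relht = (Pz-A1)/(A2-A1);	{ not constant -- Pz and the A1, A2 arguments depend on the execution environment }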
-Greg ========================================================= IMAGES - Image formats, gamma correction, contrast and colors Date: Sat, 3 Oct 92 19:42:12 -0400 From: "Jim Callahan" To: greg@hobbes.lbl.gov Subject: Exposure & PS(TIFF) Hi Greg- I understand that Radiance stores images as 32-Bit RGB. How does an adjustment of exposure effect the colors displayed. Obviously it affects the brightness of the image, but what are the differences between exposure and gamma correction? Are both needed? If a light source is too dim, I want to know in absolute terms. This is a bit confusing to me because I realize that the eye is constantly readjusting its exposure. I would like to be able to say that the image is a "realistic" simulation of a scene, but can this really be done? Also, do you have any experience with encapsulated PostScript as a image format. I can convert to TIFF with the "ra_tiff" program but I don't know where I should go from there. By the way, what kind of Indigo are you considering? I got a chance to see the R4k Elan here in Gainesville and it was impressive. We calculated that it would be faster than the whole 17 machine network I use now in terms of floating point operations! See ya later... -Jim Date: Sun, 4 Oct 92 11:04:43 PDT From: greg (Gregory J. Ward) To: jmc@sioux.eel.ufl.edu Subject: Re: Exposure & PS(TIFF) Hi Jim, You've touched on a very complicated issue. The 32-bit format used in Radiance stores a common 1-byte exponent and linear (uncorrected gamma) values. This provides better than 1% accuracy over a dynamic range of about 10^30:1, compared to about 3% accuracy over a 100:1 dynamic range for 24-bit gamma-corrected color. Changing the exposure of a Radiance image changes only the relative brightness of the image. Gamma correction is meaningful only in the presence of a monitor or display device with a power law response function. Gamma correction is an imperfect attempt to compensate for this response function to get back linear radiances. Thus, applying the proper gamma correction for your monitor merely gives you a linear correlation between CRT radiance and the radiance value calculated. (Radiance is named after the value it calculates, in case you didn't already know.) However, as you correctly pointed out, linear radiances are not necessarily what you want to have displayed. Since the dynamic range of a CRT is limited to less than 100:1 in most environments, mapping calculated radiances to such a small range of dispayable values does not necessarily evoke the same response from the viewer that the actual scene would. The film industry has known this for many years, and has a host of processing and exposure techniques for dealing with the problem. Even though computer graphics provides us with much greater flexibility in designing our input to output radiance mapping, we have only just begun to consider the problem, and it has not gotten nearly the attention it deserves. (If you are interested in learning more on the topic, I suggest you check out the excellent CG+A article and longer Georgia Tech technical report by Jack Tumblin and Holly Rushmeier.) Color is an even stickier problem. Gary Meyer and others have explored a little the problem of mapping out-of-gamut colors to a CRT, but offhand I don't know what work has been done on handling clipped (over-bright) values. This is another interesting perceptual issue ripe for exploration. The best you can currently claim for a computer graphics rendering is that photography would produce similar results. 
Combined with accurate luminance calculations, this should be enough to convince most people. In absolute terms, the only way to know is by understanding lighting design and luminance/illuminance levels appropriate to the task. It will be many years before we will have displays capable of SHOWING us unambiguously whether or not a given lighting level is adequate. I think encapsulated PostScript is just PostScript with embedded data (such as a PICT image) that makes it easier for other software to deal with since it isn't then necessary to include a complete PostScript interpreter just to display the file contents. Such files are used commonly in the Macintosh and other desktop publishing environments. Russell Street of Aukland University wrote a translator to PICT format, and I have recently finished a translator to black and white PostScript. Paul Bourke (also of Aukland University) said he was finishing a color PostScript translator, so we might have that available soon as well. (Personally, I think PostScript is a terrible way to transfer raster data -- the files are humungous and printing them tries my patience.) If you are going to a Mac environment, I still think TIFF or PICT are your best bets. I am getting a R4000 Indigo XS24. It seems to perform very well with Radiance, outpacing my Sun 3/60 by a factor of about 30! -Greg ========================================================= GENERAL - Some general questions about global illumination and rendering Date: Mon, 5 Oct 92 10:07:33 +0100 From: u7x31ad@sun4.lrz-muenchen.de To: greg@hobbes.lbl.gov Subject: Radiance and Mac High Greg, i am a student here at munich university and on striving through Internet i came across Your Radiance-SW. Since i've been interested in Computer Graphics for quite a long time already i was very happy to find something like Radiance. Is there any possibilty to get the radiance-system running on a Macintosh. The system i have is a Qudra 950, 64/520MB, 16"RGB Screen. What i want to do is to create photorealistic pictures of rooms etc. but not only with raytracing. What i am looking for is a combination from both: Raytracing & Radiosity. Do You know any SW that uses a method also calculating specular refelctions on surfaces? An adition i am thinking about a methode to include the characteristics of the various types of lamps used to give light to a scenery. But not to take too much of Your time - if Your interested please let me know and i will try to explain it in better english Thank You Christian von Stengel u7x31ad@sun4.lrz-muenchen.de Date: Mon, 5 Oct 92 09:35:13 PDT From: greg (Gregory J. Ward) To: u7x31ad@sun4.lrz-muenchen.de Subject: Re: Radiance and Mac Hello Christian, Currently, the only way to get Radiance running on the Macintosh is to get Apple's A/UX product. This is an implementation of UNIX System V with Berkeley extensions, and the current distribution (3.0) includes X11 as well. It costs about $600.00 in the States and takes up about 160 Mbytes of disk space. The good news is that you can still run most of your Mac software under A/UX (and note that you don't HAVE to run A/UX if you don't want to just because you installed it), and I use Radiance with A/UX all the time and have found it to be quite reliable. I have not ported Radiance to the native Mac OS, primarily due to lack of time and motivation. If you have used Radiance, you know that it is not a menu-based application, and thus doesn't fit into the Macintosh environment very well. 
Someday, when a proper user interface is written for the software, we can look more seriously at integrating into the Mac world. As far as I know, Radiance is the only free software that accounts for arbitrary diffuse and specular interactions in complicated geometries. It does not follow the usual "radiosity" finite element approach, but it does calculate diffuse interreflections and is essentially equivalent in functionality to so-called radiosity programs. If you want to combine ray-tracing and radiosity, I think you will have a difficult time doing better than what Radiance does already. Radiance also handles arbitrary light source distribution functions and secondary light source reflections, so you should examine and understand these capabilities before embarking on any additional programming. If after close scrutiny of Radiance's capabilities you find it lacking in certain areas, please feel free to use the source code as a starting point for your own explorations into global illumination. Be warned, though, that much work has been done by many people in this area already, and you should do your research carefully if you want to avoid duplicating work. On the other hand, I have often found that duplicating other people's work in ignorance is a good way to familiarize oneself with a problem. I do not wish to discourage your interest. There are many problems in global illumination and rendering that are largely unsolved and even unaddressed. Human perception is a good example of just such a problem. No one really knows what goes on between the display screen and the brain, or how to create a display that evokes the same subjective response from the viewer as the real scene would have. Holly Rushmeier, Gary Meyer and Jack Tumblin have done some pioneering work in this area, but there is much work still to do before we have some satisfactory answers. I wish you the best of luck in your work! -Greg Date: Thu, 15 Oct 92 19:32:49 -0400 From: macker@valhalla.cs.wright.edu (Michael L. Acker) To: greg@hobbes.lbl.gov Subject: Radiance Question I've been trying to learn your Radiance 2.1 package and apply it to rendering the lobby of a large building on my campus. (Another student and I are doing this as part of an independent graphics study course.) I have a couple questions I was hoping you could answer. 1) We have a large skylight in the roof of the lobby. To simulate this in our model, I followed the example in the tutorial document you provide with the Radiance package. (At the end of the tutorial you create a window that can transmit light and that can be seen through.) The lobby is completely enclosed and the only light sources are what we've created inside (some track lighting and recessed incandescent lights) and the light from the skylight. Before I added the skylight, the light from the light sources was sufficient to 'look' around the room (I didn't need to add the -av option in rpict). But when I add the skylight with the simulated sky as a new light source, the amount of light is blinding. I have to use 'ximage -e -6 ...' to see anything. How can I turn down the intensity of the light from the sky? I'm not picking up the info (so far) out of the documentation. As I said, I used the method you described in the tutorial. (I'm also including the artificial ground as in the tutorial because I plan to put some first floor windows in later.) 2) Can you recommend any of your examples (or documentation) on how to put a pattern on a surface? 
We're simulating a clear glass brick wall made up of many small bricks by using one large polygon of glass. But we need to simulate the grout (between the actual bricks) on the large glass polygon. I could just overlay white polygon strips over the glass polygon, but the pattern function should be applicable here. Any suggestions? Thanks, --Mike Mike Acker,macker@valhalla.cs.wright.edu Date: Fri, 16 Oct 92 10:34:44 PDT From: greg (Gregory J. Ward) To: macker@valhalla.cs.wright.edu Subject: Re: Radiance Question Hello Mike, In answer to your first question, it sounds as if you are doing nothing wrong in your modeling of a skylight. It is quite normal for a Radiance rendering to require exposure adjustment, either brighter or darker, prior to display. Pfilt is the usual program to accomplish this. Whereas most rendering programs produce 24-bit integer color images, Radiance produces 32-bit floating point color images, and there is no loss of quality in adjusting the exposure after the rendering is complete. (Normally, this would wash out a 24-bit rendering.) It is important NOT to change the value of your light sources just to get a rendering that is the right exposure, since you would lose the physical values that Radiance attempts to maintain in its simulation. (For example, the 'l' command in ximage would produce meaningless values.) As for your second question, you can affect the transmission of the "glass" or "dielectric" types with a pattern, but you cannot affect their reflection, since that is determined by the index of refraction which is not accessible in this way. Thus, you could produce dark grout with a pattern, but not light grout, because the reflectance of glass is fixed around 5%. If you want white grout, I would use the -a option of xform to place many polygonal strips just in front and/or behind the glass. The impact on the calculation time should be negligible. -Greg ========================================================= TEXDATA - Using the texdata type for bump mapping Date: Wed, 21 Oct 1992 13:19:48 +0800 From: Simon Crone Apparently-To: GJWard@lbl.gov Hello Greg, I am after information on how to use the data files for the Texdata type. I want to be able to use a Radiance picture file as a texture 'map'. Ie. using the picture file's red value to change the x normal, the blue value to change the y normal and the z value to change the z height. How might I go about this? If you could supply an example, that would be great. Many thanks, Simon Crone. Date: Wed, 21 Oct 92 11:48:02 PDT From: greg (Gregory J. Ward) To: crones@cs.curtin.edu.au Subject: texture data Hi Simon, There is no direct way to do what you are asking in Radiance. Why do you want to take a picture and interpret it in this way? Is it merely for the effect? If you have a picture and wish to access it as data in a texdata primitive, you must first convert the picture to three files, one for red (x perturbation), one for green (y perturbation) and one for z (z perturbation -- not the same as height). I can give you more details on how to do this if you give me a little more information about your specific application and need. 
-Greg Date: Thu, 22 Oct 1992 05:06:06 +0800 From: Simon Crone To: GJWard@lbl.gov Subject: Texture-data Hi Greg, The reason I wish to interpret picture files as texture data is as follows; The raytracing program ( CAN Raytracing System ) that is being used in our Architecture department contains a number of texture pictures or "bump maps" that are used for various materials definitions. I am currently converting the raytrace material list ( around 80+ materials ) to radiance material descriptions. It would be a lot easier if I could use the existing raytrace "bump map" pictures to perturb materials rather than creating new procedural pattern. A prime example of this is a water texture. The raytrace program has a very realistic water pattern, while my efforts to create such a procedural pattern have led to some fascinating, if not realistic textures ( The Molten Murcury pool is my favourite! ) The blue channel ( z ) is used as a height for calculation of shadows across a perturbed surface in the raytrace program and does not perturb the z normal. I realise this may not be possible in Radiance. I hope this helps. Simon Date: Wed, 21 Oct 92 17:50:17 PDT From: greg (Gregory J. Ward) To: crones@cs.curtin.edu.au Subject: Re: Texture-data Hmmm. Sounds like a nice system. Who makes it (CAN)? What does it cost? Anyway, you are correct in thinking that Radiance does not provide height- variation for shadowing, so this information may as well be thrown away. First, you need to put your x and y perturbations into two separate files that look like this: 2 0 1 height 0 1 width dx00 dx01 dx02 ... dx0width dx10 dx11 dx12 ... dx1width . . . dxheight0 dxheight1 dxheight2 ... dxheightwidth Replace "height" with the vertical size of the map (# of points), and "width" with the horizontal size. The y perturbation file will look pretty much the same. (The line spacing and suchlike is irrelevant.) Let's say you named these files "xpert.dat" and "ypert.dat". Next, decide the orientation of your surface and apply the texture to it. For a surface in the xy plane, you might use the following: void texdata my_texture 9 pass_dx pass_dy nopert xpert.dat ypert.dat ypert.dat tex.cal frac(Px) frac(Py) 0 0 my_texture plastic water 0 0 5 .1 .2 .6 .05 0 water ring lake 0 0 8 0 0 0 0 0 1 0 10 Finally, you need to create the following file (tex.cal): { A dumb texture mapping file } pass_dx(dx,dy,dz)=dx; pass_dy(dx,dy,dz)=dy; pass_dz(dx,dy,dz)=dz; nopert(dx,dy,dz)=0; This just repeats the texture with a size of 1. You can use scalefactors and different coordinate mappings to change this. If this works or doesn't work, let me know. (I have NEVER tried to map textures in this way, so you will be the first person I know of to use this feature.) -Greg Date: Fri, 23 Oct 1992 00:32:13 +0800 From: Simon Crone To: GJWard@lbl.gov Subject: Texture-data Greg, hello again, Well, the good news is that the texture mapping works! I've converted the raytrace water bump map from RLE format to radiance PIC and used the pvalue program to create a large data file. A small C program then converts this data into the separate x and y perturbation files. The example of the data file you suggested needed a bit of a modification. It needed to be: 2 0 1 height 0 1 width dx00 dx01 dx02 ... dx0(width -1) dx10 dx01 dx12 ... dx1(width -1) . . etc i.e. the data was one array too wide and high. The texture works well and is easy to adjust both in the tex.cal function file and through normal transformations etc. 
The only drawback is that the size of the data files can be quite large and radiance takes a while to read in and store all the data. For example the water.rle bump map (a 256x256 image) takes up 203486 bytes. The water.dat file generated from pvalue is 4194351 bytes. The xpert.dat and ypert.dat files are each 655379 bytes. As to your queries on the raytrace program ... It is the Computer Animation Negus Raytracer (CAN) developed at the School of Computing Science, Curtin University of Technology, Western Australia. I am not sure of its cost but you can get more information from the following mail address: raytrace@cs.curtin.edu.au -Simon Date: Thu, 22 Oct 92 10:12:11 PDT From: greg (Gregory J. Ward) To: crones@cs.curtin.edu.au Subject: Re: Texture-data Hi Simon, I'm glad to hear that it works! Sorry about my error in the data file description. Yes, the data files are large and not the most efficient way to store or retrieve data. Sounds like yours is taking about 10 bytes per value. Different formatting might reduce this to 5 bytes per value, but that's about the best you can hope for. In most cases, the ASCII data representation is preferable for ease of editing and so on. (Data files are most often used for light source distributions.) The main exception is pattern data, for which I allow Radiance picture files, as you know. Since you are currently the only one I have heard from using texture data, it doesn't seem necessary at this point to create yet another file type to hold it, and I don't favor using a picture format to hold other types of data. (The Radiance picture format doesn't allow negative values, for one thing.) -Greg ========================================================= CSG - Using antimatter type for destructive solid geometry Date: Fri, 30 Oct 92 16:37:32 PST From: rocco@Eng.Sun.COM (Rocco Pochy) To: greg@hobbes.lbl.gov Subject: radiance question I just stared playing around with radiance and have ran into a problem trying to create a sphere with a missing slice (i.e like and orange slice). How would you go about implementing this feature? Something like a CSG subtraction... Looks pretty hot from what I've seen... R. Date: Fri, 30 Oct 92 17:17:06 PST From: greg (Gregory J. Ward) To: rocco@Eng.Sun.COM Subject: Re: radiance question Hello Rocco, Radiance does not support CSG directly. There are two ways to create an orange with a wedge missing. The easiest is to use gensurf to make a boundary representation (using Phong-smoothed polygons) like so: !gensurf 'cos(5.5*s)*sin(PI*t)' 'sin(5.5*s)*sin(PI*t)' \ 'cos(PI*t)' 20 20 -s The value 5.5 is instead of 2*PI to get the partial sphere. You may have to use a couple of polygons or rings if you want the sliced area to be solid. The sphere here will have a radius of one, centered at the origin. You can use xform to size it and move it from there. The second way to get an orange with a wedge missing is to use the antimatter type to "subtract" the wedge from a real sphere. The description might go something like this: void plastic orange_peel 0 0 5 .6 .45 .05 .05 .02 void antimatter orange_slice 1 orange_peel 0 0 orange_peel sphere orange 0 0 4 0 0 0 1 !genprism orange_slice slice 3 0 0 2 0 2 1.5 -l 0 0 -2 \ | xform -t 0 0 1 Genprism makes a triangular prism to cut the wedge from the sphere. This will make a slice using the same material as the peel. If you want a different material there, you can prepend your material to the list of string arguments for orange_slice. 
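For example, the cut could be given its own look something like this (only a sketch; "pulp" and its reflectance values are invented for illustration):

void plastic pulp 0 0 5 .85 .6 .25 0 0
void antimatter orange_slice 2 pulp orange_peel 0 0

Since the first material listed is the one used at the antimatter boundary, the faces of the cut will now be rendered with pulp, while the rest of the sphere keeps its peel.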
Note that there are problems with the antimatter type that make the the gensurf solution preferable if you can live with it. Hope this helps! -Greg ~s Radiance Digest, v2n5, Part 1 of 2 Dear Radiance Users, It has been nearly a year since I've sent out a Radiance Digest, so there's rather a lot of material to sift through here. As usual, I've tried to break things up into somewhat coherent subjects, but in the process I've collected e-mails that were originally separated by many months. I hope this doesn't cause any undue confusion, and cronology is at least maintained within each section. A good part of the delay in this edition is the delay in a release of Radiance itself. Since I was discussing in some cases as-yet- unreleased features, I didn't want to taunt the greater user population with things they couldn't even try. Now that 2.3 is out, there's nothing to stop us... Since there is so much here, I've broken it into two parts to avoid the 100K limit of some mailers. These are the topics covered in this mailing: SKY SOURCES AND RGB - Dealing with the sky and what is RGB, anyway? CAD TRANSLATORS - How to get to Radiance from CAD systems WIREFRAME - Generating wireframe renderings w/o CAD SHARED IMAGES - Some pictures to share by J. Mardaljevic SENSITIVITY RUNS - Ambient (-a*) parameters and accuracy USING BRDF DATA - How to apply BRDF data in Radiance IES SOURCES - The standard IES format and source library Enjoy! -Greg =============================================================== SKY SOURCES AND RGB From: georg@ise.fhg.de Subject: light and glow To: GJWard@lbl.gov Date: Wed, 6 Jan 93 19:12:24 MEZ Hi Greg! The third mail this day, but there is something, I don't understand: glow and light give different results with rtrace ( light is two times the glow). Is this the case in general, or do I have to take care which of both I use ? Here is a small test ( the camera looking upwards) : #glow: void glow sky_glow 0 0 4 1 1 1 0 sky_glow source sky 0 0 4 0 0 1 180 gives: oconv test.rad rtrace -x 1 -I -ab 1 SOFTWARE= RADIANCE 2.1 official release May 20, 1992 FORMAT=ascii 3.141593e+00 3.141593e+00 3.141593e+00 ----------------------------------------------------- #light: void light sky_glow 0 0 3 1 1 1 sky_glow source sky 0 0 4 0 0 1 180 gives: oconv test.rad rtrace -x 1 -I -ab 1 SOFTWARE= RADIANCE 2.1 official release May 20, 1992 FORMAT=ascii 6.283185e+00 6.283185e+00 6.283185e+00 Date: Wed, 6 Jan 93 11:40:38 PST From: greg (Gregory J. Ward) To: georg@ise.fhg.de Subject: Re: light and glow Dear Georg, The value for glow is more accurate. You simply cannot use a light source that takes up the whole sky, because it will not be calculated correctly. In order for the source calculation to work for the sky, it would have to subdivide it into many pieces. Although this is theoretically possible and even feasible within Radiance, it is preferable to treat such large sources as part of the indirect calculation, since they do not cast strong shadows. I hope this answers your question. -Greg From: mcardle@eol.ists.ca (Steve McArdle) Subject: Radiance Package To: gjward@lbl.gov Date: Wed, 4 Aug 1993 12:43:38 -0400 Greg Ward I'm trying to get specific information regarding the radiance programs. I'm in the process of trying to simulate data for a forest scene in clear sky's and under cloud to determine the apparent reflectance of a given area. However, because of limited documentation on not sure if the program is capable of performing these tasks. 
I was wondering if you had technical information on formulas, specific wavelengths used for RGB, or any assumptions. If you do not, maybe you could direct me to where I might find this information. The work I'm doing is part of my M.Sc. thesis work studying effects on reflectance under varying illumination conditions; any help would be much appreciated. Thanks, Steven McArdle York University Toronto, Ont Canada mcardle@eol.ists.ca -- Date: Wed, 4 Aug 93 10:05:32 PDT From: greg (Gregory J. Ward) To: mcardle@eol.ists.ca Subject: Re: Radiance Package Hi Steven, There is a document in the Radiance distribution called "materials.1" in the ray/doc directory that gives the formulas used for lighting calculations. I suggest you look there first. I make no assumptions about what exactly is meant by the red, green and blue components, except that these are the components given to the display device. For the purposes of calculation, you may assume that they correspond to total energy (and are equal) or represent parts of the infrared spectrum. Radiance really doesn't care as it contains no spectrum-specific computations. -Greg From: mcardle@eol.ists.ca (Steve McArdle) Subject: Radiance help To: greg@hobbes.lbl.gov (Gregory J. Ward) Date: Wed, 25 Aug 1993 20:28:03 -0400 Hi Greg, I am running into a slight problem using the gensky command and I don't really have anybody to turn to for help. We have version 2.1 here, which has been ported to the HP Apollo 9000 series 700. I have some documentation, but the information describing the features is limited. I need your help to verify that I'm on the right track and that I understand what's going on. To refresh, I am creating a simple forest scene of cone-shaped trees for a clear sky and a cloudy sky. I am using the gensky command to describe the distribution of the light, but I am not sure if I am using it right. Here is the file:
!gensky 7 23 8 +s -a 45.5 -o 79.8 -m 75
skyfunc light sky 0 0 3 305 331 291
sky source skylight 0 0 3 0 0 1000 180
This is the file I used to create a clear sunny day on July 23, 8 am, at that lat/long, with the radiance value given by skyfunc. I have separate files for the ground and the trees. Now my questions to you are:
[1] What does skyfunc describe? Is it the radiance value of the light source, or is it supposed to define the sky radiance?
[2] Also, is the radiance value at the top of the atmosphere, at the ground, or where the source is defined (z=1000)?
[3] Does gensky work out the radiance value for that position, or just the distribution pattern, so that I am supposed to input the radiance value myself?
[4] Is the source describing the surface of the sky, and does it matter if I set the z component to 1 or 1000?
[5] Should I be using light or glow? I used the same command to create a cloudy sky, except instead of +s I used -c for the CIE distribution. However, I got some pretty high radiance values.
[6] For the cloudy sky, I take it that I am supposed to adjust the light source to radiance values for the light coming beneath the cloud.
[7] How do I define the surface of the cloud?
Some other questions:
[8] I assume that the RGB wavelengths are the CIE wavelengths?
[9] In using rview and rpict there is an option for ambient light, -av. Is this option supposed to represent the sky radiance or the ambient light between the objects, i.e. the cones?
[10] This is a good one: what would be the number of indirect light calculations for a real scene, -ab 10, 20, or 30?
I guess you could say I'm a little confused and I would gladly appreciate any help you could give me.
Steve -- Date: Thu, 26 Aug 93 17:46:06 PDT From: greg (Gregory J. Ward) To: mcardle@eol.ists.ca Subject: Re: Radiance help Hi Steve, The gensky call you gave as an example was fine, but what followed it was a little bit off. You had: skyfunc light sky 0 0 3 305 331 291 sky source skylight 0 0 3 0 0 1000 180 A more sensible description is: skyfunc glow sky 0 0 4 .9 .9 1.2 0 sky source skylight 0 0 3 0 0 1 180 A sky source should be of type glow rather than light (question 5), and the RGB values are actually modifying the sky radiance, as determined by the "skyfunc" description produced by gensky (question 3). Gensky with the +s option (default) produces both a description of the sun and the sky radiance distribution, although the latter is not actually applied to anything in gensky (question 1). To specify a zenith radiance other than the default determined by gensky from solar altitude, sky type and atmospheric turbidity, use the -b option of gensky (question 2). The third real argument in the source descripiton is merely the z component of the direction vector, and has nothing to do with radiance values. Since the direction vector gets normalized, it actually doesn't matter what positive value you give for z if x and y are zero (question 4). The CIE cloudy sky is actually full overcast, ie. there are not clouds visible and no "under cloud" radiance changes (question 6). It is simply a smoothly varying function that peaks at the zenith and drops steadily to one third this value at the horizon. There are no clouds and no cloud surfaces in a Radiance sky. If you wish to add your own pattern to the skyfunc distribution given by gensky, you may use and of the brightfunc, colorfunc, brightdata, and colordata primitives to create a variation in radiance as a function of sky direction (question 7). Question 8: The RGB units are typical computer graphics monitor phosphors, not CIE XYZ If you want to convert from XYZ to RGB or vice versa, you may use the routines in src/common/spec_rgb.c. Question 9: For exterior scenes, use the value suggested by gensky for the -av parameter of rpict or rview. For example, "gensky 7 23 8 +s -a 45.5 -o 79.8 -m 75" produces the following file: # gensky 7 23 8 +s -a 45.5 -o 79.8 -m 75 # Solar altitude and azimuth: 30.687400 -88.286823 # Ground ambient level: 15.379269 void light solar 0 0 3 6.19e+06 6.19e+06 6.19e+06 solar source sun 0 0 4 0.859580 -0.025710 0.510354 0.5 void brightfunc skyfunc 2 skybright skybright.cal 0 7 -1 7.64e+00 1.52e+01 4.04e-01 0.859580 -0.025710 0.510354 The suggested ambient level is 15.379heyaretherereallythismanydigits, so you might run rview with: rview -av 15.4 15.4 15.4 ... octree Question 10: I don't think I've ever needed a value for -ab above 4, and for exterior scenes, -ab 1 is perfectly adequate. What documenation do you have, by the way? -Greg From: apian@ise.fhg.de Subject: RGB nm values ? To: gjward@lbl.gov (Greg Ward) Date: Fri, 12 Nov 1993 15:36:18 +0100 (MEZ) Hi, Have I overlooked something or are there any hints where (in terms of nanometer) the 3 channels (RGB) should be ? The raytracing itself is probably independent, how about gensky, ximage, conversion watt->luminance ? 
(probably a very naive beginner's question) :-) Peter -- ---------------------------------------------------------------------- Peter Apian-Bennewitz apian@ise.fhg.de Fraunhofer Institute for Solar Energy Systems Tel +49-761-4588-123 (Germany) D-79100 Freiburg, Oltmannsstrasse 5, Fax +49-761-4588-302 ---------------------------------------------------------------------- Date: Fri, 12 Nov 93 08:00:14 PST From: greg (Gregory J. Ward) To: apian@ise.fhg.de Subject: Re: RGB nm values ? Hi Peter, This is not really a naive question, and I'm afraid I don't have a very good answer for you. The RGB values are defined in terms of average chromaticity coordinates of a computer monitor, and I don't have any corresponding spectral curves. The conversion between watts and lumens is simply a constant, 179 lumens/watt. This constant was derived by integrating white light over the visible spectrum, but if you try to reproduce this result yourself, you're likely to get a slightly different answer because it depends keenly on what is considered visible (ie. the limits of integration). In the end, the constant is not terribly important because it gets applied at the beginning to get radiance from luminance, then inversly at the end to get lumiance from radiance. As long as the same value is used, the result is independent of the constant chosen. This is the short answer. The long answer would require more space and more thought on my part! -Greg ============================================================== CAD TRANSLATORS Date: Mon, 11 Jan 93 10:08:44 PST From: greg (Gregory J. Ward) To: raydist, raylocal Subject: Radiance CAD translators Mark Wright of the Speech, Vision and Robotics Group at Cambridge writes: Hello, Do you know a path from the IGES or VDA-FS CAD data formats into the radiance raytrace? I want to construct a number of fairly complex 3D models from HP's ME30 3D CAD system to rayshade or Radiance ray tracers. The ME30 package outputs IGES or VDA-FS. I know packages that will take nff, off files and output to the the raytracer. I am looking for a public domain/shareware filter or package that has this ability built in. -------------------- If you know of any such translator, or have written a translator yourself from any popular CAD format to Radiance, I think we'd all like to know. Please send mail to me (GJWard@lbl.gov) if you have a translator that you would be willing to make available for free or for a price, with or without support, to a larger audience. Thanks! -Greg Date: 11 Jan 1993 16:13:21 -0800 From: "Matthews, Kevin" Subject: RE: Radiance CAD translators To: "Gregory J. Ward" Greg, Regarding your inquiry about CAD translators to Radiance, our software DesignWorkshop(tm) will be capable of supporting Architrion text and DXF to Radiance, with WYSIWYG view specification transfer from the DesignWorkshop dynamic viewing environment (along with the geometry file we export canned shell scripts for rview and rpict with default parameters-actually you might like to comment on the params we've come up with so far). DesignWorkshop definately falls into the "for a price" and "supported" categories, although our academic price is $145 per single user license. At the academic price DW might be cheap enough for someone to get it just to use for translation, although it would be a little funny, and it mostly duplicates capabilities you already support. DW makes the translation more interactive (as well as being a great modeler in its own right). 
Information on DesignWorkshop follows in case you or your correspondents are interested. Additional info on request... Regards, Kevin ______________________________________________________________________________ DesignWorkshop(tm) ... three-dimensional modeling for conceptual design and design development DesignWorkshop(tm) brings the simplicity of the classic Macintosh drawing interface for the first time to architectural design modeling and spatial visualization. Solid objects are created in live perspective or orthographic views by simple three-dimensional dragging with the mouse, and moving, resizing, and extruding are all accomplished in the normal selection mode without any special tools. It's the fastest legal way to model your building! Modeling o fully three-dimensional direct-manipulation Mac-style interface o graphically create and reshape cuboids, cylindrical columns, extruded arches and mouldings, contour site models, etc. o click-and-drag in any view to create and resize openings in solid-object walls o sophisticated internal technology- feature-based solid-modeling with intelligent polygonal BREP objects and floating-point coordinate accuracy o 3D object snaps, paste PICT image onto face of 3D object, etc. Viewing o drag with the "eye" and "look" tools for fully dynamic design-oriented view adjustment o two-dimensional and three-dimensional zoom o plan, section, elevation, perspective, and axonometric views, all fully editable o shaded sections without cutting model, poched automatically o open multiple documents with multiple windows for each document Rendering o wireframe, hidden-line, flat shaded, and shadow-cast in 32-bit color o sun studies rendered in parallel and recorded directly as QuickTime movies o walkthroughs by view list with variable interpolation, saved as QuickTime Interchange o full clipboard support-copy and paste from 3D to 3D, 3D to 2D, or 2D to 3D o import and export Claris CAD, PICT, Architrion, DXF, Radiance formats o publish views from 3D to 2D for drafting, images for presentation, data for analysis Output o print current window at any time with any standard Mac QuickDraw or PostScript printer o save current window at any time as an object or bit-map PICT file. Objects and Data o read and edit object dimensions, parameters, materials in Object Info windoid o create live data links to external applications General o built-in designer's markup pencil and eraser-markups print and save with image o straightforward site modeling and contour editing Available o First release shipping 2-93. List price $895. (Quantity and academic pricing available). o Money-back satisfaction guarantee, 90 days o Special Pre-Release Price, $295, available through 1-31-93 o A pre-release purchase gets you software now, and includes the full release version as soon as it's available. For more information on DesignWorkshop(tm), or to order, contact Artifice: Artifice, Inc. P.O. Box 1588, Eugene, OR 97440. 503-345-7421 voice, 503-346-3626 fax, AppleLink D3624, Internet dmatthew@oregon.uoregon.edu Date: Mon, 11 Jan 93 21:46:01 PST From: chas (Charles Ehrlich) To: greg Subject: Re: Radiance CAD translators Greg, Just about any 3D CAD format can be used by Radiance...in one form or another. My favorite way to translate data from out-of-the-ordinary sources is to use the Macintosh software called CADMover by Kandu software. An IGES file, for example, can be translated into a DXF file, and then the DXF file translated into Radiance with DXFCVT. 
I have had good success with this process. I recommend that Radiance users separate their 3D geometry files by material type, thereby facilitating the process of defining surface material properties. Any questions, call 415 882-4497. Leave a message telling me when and where I can call you, collect. Chas Date: Wed, 13 Jan 93 14:32:28 +1100 From: angus@cgl.citri.edu.au To: greg@hobbes.lbl.gov (Gregory J. Ward) Subject: Re: Radiance CAD translators to the best of my knowledge, there are no public-domain implementations of IGES readers, probably because the standard is the size of a telephone book (it was written by committee - need i say more). i wrote the beginnings of an IGES reader a couple of years ago for my boss, but the project was shelved. _any_ implementations of IGES readers are likely to be incomplete due to the number of different primitives & cases in the standard. i have recently seen a number of postings on the net looking for public-domain IGES readers, and they have all resulted in failure. .angus. angus@cgl.citri.edu.au Date: Sun, 22 Aug 93 16:34:52 EDT From: spencer@cgrg.ohio-state.edu (Stephen N. Spencer) To: gjward@lbl.gov Subject: radiance greg - how's it going? you can add to your list of appearances a slide in the 1992 SIGGRAPH Educator's Slide Set -- we did use the conference room slide and image pair. what do you use for modeling/scene building? we've found the steps needed to do even simple texture mapping real difficult. hate to say this, but it's just not very intuitive. have others commented on this? is there any relief/tips/etc? has there been any further development of RADIANCE since v2r1 in may 1992? thanks! steve spencer Date: Sun, 22 Aug 93 19:01:26 PDT From: greg (Gregory J. Ward) To: spencer@cgrg.ohio-state.edu Subject: Re: radiance Hi Steve, I still rely heavily on vi for scene building myself, though others working closely with me use Vision3D or Architrion on the Mac, or AutoCAD. Vision3D is by Paul Bourke and has a Radiance file output option. The software is available in the pub/mac directory on hobbes.lbl.gov. The text output format of Architrion is translated by arch2rad, but unfortunately, the text format has been abandoned in favor of DXF in more recent releases of the software. Someone in Zurich has written a highly-touted translator that runs within AutoCAD, and is available in the pub/translators directory on hobbes. I've heard many good things about this translator from folks who've used it, but I haven't had much chance to apply it myself. I'm not much of a CAD user, I'm afraid. I agree that adding textures and patterns to a Radiance description is far from intuitive. In an effort to be general, I made things a bit too cryptic. On the other hand, the input language was not really meant to be the interface to average users. It has ended up that way, though, due largely to a lack of funding for a front end. It wouldn't be so bad if I'd at least managed to document stuff, but I haven't even gotten enough money for that. (As an alternative to the Department of Energy, I've been trying to interest a publisher in doing a Radiance book. So far, I haven't had much luck.) I'm amazed that despite all the obstacles, there are still some users who manage to figure out how to do most things with little or no help from me. If you give me an example picture or something you want to apply and the surface you want it applied to, I'd be happy to whip it out for you. Regarding new Radiance developments, I haven't just been sitting on my thumbs over here.
In fact, I've been ready to go with release 2.2 for over six months, but the DOE won't let me release it. They're in the process of deciding how to "market" Radiance, and are concerned about the large public distribution we've had to date. Version 2.2 will have two significant and many minor improvements over 2.1. The first important change is the addition of techniques for parallel and network distributed rendering. The second change is the addition of a new executive program for running oconv, rpict, pfilt and rview that sets options more intelligently based on user input of qualitative information. Hopefully, the DOE will come to their senses in the next few months and we'll make a new release available. In the long run, I'm looking for some way to completely redesign the Radiance input format to clear up a lot of the confusing and irrelevant aspects of the current format in favor of something more general and "programmable". Still, I don't expect vi to be the input modeler of choice for most people... -Greg Date: Mon, 23 Aug 93 09:48:43 EDT From: spencer@cgrg.ohio-state.edu (Stephen N. Spencer) To: greg@hobbes.lbl.gov Subject: radiance Thanks for the lengthy and speedy reply, Greg. Sorry to hear that DOE is wondering about what to do about all the happy RADIANCE users out there. Can we write letters to someone? We don't have AutoCAD here, unfortunately, so can't use that translator. I do recall, though, you touting it as a really cool filter. > I agree that adding textures and patterns to a Radiance description is far > from intuitive. In an effort to be general, I made things a bit too cryptic. Ahhh, the 'general-purpose-solution-makes-no-one-really-happy' thing. As you may recall, we use our own scene-description program, and send the output of it through a program "ani2rad" which generates a ".rad" file. I'm probably going to rewrite that stuff, though, since we're using a new scene-description (keyframe animation, actually) program here these days, and having *that* program write a ".rad" file directly. There is a bit of "vi" or "emacs" being used as a front end to Radiance, though, to massage the ".rad" file before Radiance uses it, for two reasons: (a) lining up the texture maps, and (b) adding in Radiance features that my program doesn't handle (like spotlights, though that will change in this new program). I'm sure you'll let us all know when 2.2 is allowed to be released. Let me know if there's letter-writing that could help the situation. Agreed, a new interface would be wonderful. Have you considered having Radiance read RIB (Pixar's RenderMan) files as input? Stephen N. Spencer Graphics Research Specialist ,__o spencer@cgrg.ohio-state.edu spencer@siggraph.org ---- _-\_<, (614) 292-3416 ---- (*)/'(*) "and the things we need the most to say are the things we never mention" - E.S. Date: Mon, 23 Aug 93 08:44:37 PDT From: greg (Gregory J. Ward) To: spencer@cgrg.ohio-state.edu Subject: Re: radiance Hi Steve, Yes, I have had a look at RenderMan. It's a very advanced interface when it comes to geometry, and the shading offers many advanced features, but it's completely non-physical. In fact, it is very difficult to produce a physically correct model with RenderMan, as the parameters used in its reflectance model have no physical meaning. RIB is just a sequence of specialized subroutine calls, so being fully RIB compliant means adopting the RenderMan renderer (or at least their methods). -Greg Date: Mon, 23 Aug 93 11:49:47 EDT From: spencer@cgrg.ohio-state.edu (Stephen N. 
Spencer) To: greg@hobbes.lbl.gov Subject: radiance Agreed -- RenderMan's not physically based at all. Still spits out neat pictures, though, if that's what you want. I was suggesting it more for the scene description format, but if it's not able to describe a scene sufficiently then that's not an option. steve Date: Mon, 23 Aug 93 08:56:10 PDT From: greg (Gregory J. Ward) To: spencer@cgrg.ohio-state.edu Subject: Re: radiance The scene geometry is very well-defined in RenderMan. Better, in fact, than can be understood by Radiance. (More higher order primitives, etc.) Others have used the RIB exporters of ArchiCAD, for example, to produce Radiance input. Writing a completely general RIB translator is much more difficult, however. Date: Sat, 30 Oct 93 18:01:19 GMT From: jon@esru.strathclyde.ac.uk (jon) To: greg , mike.donn@vuw.ac.nz Subject: availability of esp-radiance translator ESP-r - Radiance Desktop Jon Hand A recent enhancement of the ESP-r system has been the creation of a link to the Radiance lighting simulation package authored by Greg Ward of LBL. This takes the form of a program called "rad" which is able to take an ESP-r problem configuration file and to generate not only the sky, outside and room descriptive files used by Radiance but to act as a desk-top which drives many of the Radiance executables. At present rad picks up the surface properties (reflection is greyscale at present) from the ESP-r definition and creates colours for each surface while building the files. It executes several of the radiance executables to build the sky definition, octrees etc. Only in the case of not quite planar surfaces (radiance is much tighter on this than ESP-r) have we had to resort to editing the descriptive files. Currently we have generated external views of a number of our thermal simulation problems, and getting to the point of starting up rpict usually requires about five minutes. Of course if non-grey images or surface textures are required there would be additional user intervention required. On the todo list are: 1) picking up glazing transmission characteristics from our optical database (a general default transmission is used currently), 2) facilities related to describing lighting fixtures, furniture and the like (as this sort of clutter does not often get described for thermal simulations), 3) sorting out interior views that have adjacent rooms which see each other (case of picking up more topology information from esp-r). In order that others might test and comment on the facilities I have placed on the Strathclyde ftp server the rad executable (Sun4, requires X11R5) and two radiance file sets and a few pic files. If you are interested please email esru@strathclyde.ac.uk for instructions on picking up the files. Regards, Jon Hand October 30, 1993 Date: Sun, 31 Oct 93 08:28:07 PST From: greg (Gregory J. Ward) To: jon@esru.strathclyde.ac.uk Subject: Re: availability of esp-radiance translator Great, Jon! Now, I just have to get you to change the name! The next release of Radiance, due in a week or two, has a new executive program called "rad" also. I expect that you will want to use this at least a little in your own interface, as it has some built-in intelligence for setting rendering parameters, using ambient files, recovering aborted images, etc. -Greg Date: Sun, 31 Oct 93 16:37:16 GMT From: jon@esru.strathclyde.ac.uk (jon) To: greg Subject: rename and new version of radiance Greg, I would be happy to change the name - perhaps e2r or something.
Keep me posted on when the new version comes out and how to access it. Jon Hand ================================================================== WIREFRAME Date: Thu, 28 Jan 93 16:27:00 NDT From: pdbourke@ccu1.aukuni.ac.nz (Paul David Bourke) Subject: wireframe from Radiance To: GJWard@lbl.gov (GJWard@Lbl.GOV) I've uploaded a little something which may be of interest to some people, who knows. The following is the README file in the tar archive wireframe.tar located in the xfer directory. You can decide where it should go if anywhere. Creating wireframe hiddenline images with Radiance One problem we have encountered is that if we create a model "by hand" with Radiance (as opposed to using a 3D modelling package and then a translator) then we are often disappointed that there is no way of creating a wireframe hiddenline version, plans, elevations, perspectives etc. After a little playing I have found a way of doing this; the procedure is explained below.

1) xform -e the model so that you have a file consisting only of Radiance primitives.

2) Run the short (and messy) program supplied with this archive. It creates a new Radiance file where each primitive has a separate colour (color for the US) and there are no light sources, textures, etc.

3) Render the model as usual with typically high values of ambient light, 0.5 say. The image should also be rendered to a large size bitmap. This will result in an image with flat shading on each primitive, and each one will be a different colour. In other words there is a transition or edge between each primitive.

4) Run some edge detection over the resulting image. We use PhotoShop's "find edges".

5) Convert to grey scale, possibly invert and adjust the levels of the image (depends on the edge detection used) and then, when a suitably high contrast image is obtained, convert to a black and white bitmap.

I have enclosed:

  wirerad.c   source to colour primitives
  bill.rad    example scene
  wire.rad    output of bill.rad after colourization
  wire.tif    run this for image generation
  wire.tif    example wireframe image from above example

Oh well, it was fun and someone else may find it useful. ------------------------------------------------------------------------------ Paul D. Bourke School of Architecture, Property, Planning pdbourke@ccu1.auckland.ac.nz The University of Auckland Private Bag 92019 Ph: +64 -9 373 7999 x7367 Auckland Fax: +64 -9 373 7410 New Zealand
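[As a rough illustration of steps 1-3 above, the Radiance side of the procedure might be run something like this. The wirerad invocation is only a guess at how the supplied program is used (check the wirerad.c source for its actual usage), and the view file, resolution and ambient value are merely illustrative starting points:

  % xform -e scene.rad > flat.rad
  % wirerad < flat.rad > colored.rad
  % oconv colored.rad > colored.oct
  % rpict -vf view.vf -av .5 .5 .5 -x 2048 -y 2048 colored.oct > big.pic

The edge detection and thresholding of steps 4 and 5 are then done outside of Radiance.]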
=============================================================== SHARED IMAGES Date: Wed, 10 Feb 93 16:29:31 -0800 From: greg@pink.lbl.gov (Gregory J. Ward) To: greg@hobbes Subject: New images for archive

  ===========================================
  =                                         =
  =   Images Created by the ECADAP Group,   =
  =       De Montfort University, UK,       =
  =      Using the RADIANCE Synthetic       =
  =            Imaging Software.            =
  =                                         =
  ===========================================

The Scenes
----------

Visualisations of a low-energy urban office scheme (following a detailed daylighting analysis)

  FOGGO_DOWN.pic  : Outside view down
  FOGGO_LIFT.pic  : View to lifts
  FOGGO_NITE.pic  : Nighttime view, using 'light' (main) and 'glow' (offices)
  FOGGO_POOL.pic  : "Fisheye" view up to lifts from just below water surface
  FOGGO_UP1.pic   : Main view of atrium
  FOGGO_UP2.pic   : Detail of main view

Design for daylit art gallery - rooflight design with shading devices

  GALLERY.pic     : View to paintings
  GALLERY_LUX.pic : Illuminance map of view

Hi-tech atrium building

  ROPE_ATRIUM.pic : Main view of atrium
  ROPE_OFFICE.pic : Office adjacent to atrium

Scene loosely modelled on Vancouver Law Courts atrium

  VANC_DAY.pic    : Sunny day
  VANC_NITE.pic   : Nighttime with lights

Images are copyright ECADAP, De Montfort University, UK. They must not be used for any publication etc. without prior authorisation. No CAD modeller was used for any of the scenes. J. Mardaljevic gratefully acknowledges the excellent support provided by Greg Ward in helping to understand RADIANCE and to use it to the best effect. Address any queries etc. to John Mardaljevic. --------------------------------------------------------------------- Environmental Computer Aided Design And Performance - ECADAP --------------------------------------------------------------------- John Mardaljevic ECADAP Group E-mail(int): edu@dmu.ac.uk School of the Built Environment E-mail (UK): edu@uk.ac.dmu De Montfort University The Gateway Tel: +44-533-577417 Leicester LE1 9BH U.K. Fax: +44-533-577440 --------------------------------------------------------------------- ========================================================== SENSITIVITY RUNS Date: Wed, 17 Feb 1993 08:50:59 -0800 From: "(Raphael Compagnon)" To: greg@hobbes.lbl.gov Subject: Sensitivity analysis Hello Greg ! I did some sensitivity analysis on the ambient calculation parameters. Here are some first results: The goal of this sensitivity analysis is to give some guidance on how a good interreflection calculation can be performed by Radiance without spending an enormous amount of CPU time. The idea is to perform many simulations of the same scene with different values for the parameters controlling the ambient calculation. Then all resulting pictures are compared to a "reference picture". Then the effect of each parameter can be estimated. The scene that has been used is a closed room with a small window in the center of the ceiling. (It is exactly the same scene proposed by H. Rushmeier, who made an inter-program comparison last December.) The reference picture is the final picture received from H. Rushmeier: it is in fact a mean picture calculated from the results of at least 3 different programs. The good agreement between the results of those 3 programs ensures that the "mean picture" is a good estimate of the "reality". Each picture that has been computed during this study has been compared to the reference picture by calculating its root mean square difference:

  RMS = sqrt( Sum_on_all_pixels( (Picture_pixel(i) - Ref_pixel(i))^2 )/Npixels )

RMS is then a measure of the accuracy of the calculated picture compared to the reference picture.
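[For anyone who wants to try a comparison like this, one rough way to estimate the RMS brightness difference between two Radiance pictures of the same size is to combine them with pcomb and average the squared differences, e.g.:

  % pcomb -e 'd=li(1)-li(2);ro=d*d;go=d*d;bo=d*d' test.pic ref.pic \
      | pvalue -h -H -d -b | total -m | rcalc -e '$1=sqrt($1)'

This is only a sketch -- it compares pixel brightness rather than full color, and the exact pvalue flags for suppressing the header and resolution string may vary between Radiance versions -- and it is not necessarily the procedure Raphael used.]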
The parameters that have been tested are the following (with their minimal and maximal values):

  parameter   Level -   Level +
  -aa         0.2       0.1
  -ar         32        64
  -ad         256       512
  -as         128       256
  -ab         1         2
  -av         0.0169    0

All combinations of the two levels (-;+) of these six parameters have been computed using a factorial scheme 2^6, resulting in 64 simulations. From the results of these simulations the following conclusions can be stated: By far the most sensitive parameter affecting the accuracy is the ambient bounce number (-ab). The second most influential parameter is the -av value. None of the other parameters affects the accuracy significantly! The CPU time needed for computing the picture is proportional to the number of rays traced for each simulation. This number of rays is very sensitive to the value of the -ab and -aa parameters and is also sensitive to the value of the -ad parameter. Those sensitive parameters also show strong interactions between them: that means that setting two of those parameters at their upper level will increase the CPU time far more than just setting one of them to its upper level. The three other parameters (-ar, -as and -av) don't strongly influence the number of traced rays. Considering this, we can see that good results can be achieved by increasing -ab from 1 to 2 while fixing -aa and -ad at their lower levels, so that the CPU time doesn't increase too much. A good estimate of the ambient value -av is also something that will provide accuracy without increasing the CPU time, but this estimate cannot easily be found... Those conclusions are valid for this specific scene and for parameters lying within the same lower and upper levels that have been used for this sensitivity analysis! I must still make sure that the conclusions I explained here are still valid for other scenes... I hope this first trial will help to find out some kind of "rules" to define best values for the ambient parameters! Tell me if you have a better idea for comparing the accuracy of one picture against a reference picture. Bye ! Raphael Date: Wed, 17 Feb 93 15:46:55 PST From: greg (Gregory J. Ward) To: compag@lesosun2.epfl.ch Subject: Re: Sensitivity analysis Hi Raphael, Your sensitivity analysis is interesting. I will publish it in the Radiance user's digest, with your permission. Unfortunately, I have some doubts that the results will extend to other environments. I have found the setting of these parameters to be something of an art, and the lower values certainly do not work in all cases. It would be very interesting, and very difficult, to determine just when they were needed. I am thinking that there should be an "oracle" program that could examine the input files and the octree and so on and make a recommendation for viewpoints, parameter settings, etc. -Greg [This was the first germ of the idea for "rad"] ================================================================ USING BRDF DATA From: sick@ise.fhg.de Subject: Re: Question concerning BRTDfunc's To: greg@hobbes.lbl.gov (gregory ward) Date: Fri, 19 Feb 93 8:30:55 MEZ Hi Greg, thanks for your examples - they help. But as you assumed I have some questions still. I believe I could most easily work with the data, eg. transdata material types. So far, however, I do not see the relationship between the data file and the functions, eg. in your reflector example. In order to understand, it would probably be sufficient for me to know the meaning of the data in the sae_refl.dat file. I could then figure out the rest.
How can I relate to the direction of the incident light? There is no predefined vector in rayinit.cal, is there? I read x,y,z in some places, but they seem to be general variables. What exactly are coordinate indices and coordinate index functions? I hope I do not take too much of your time. As feedback for you: There are more and more people working with RADIANCE here in the department. And the more we find out about it and its proper use the better. I was just recently invited to give a talk and paper (for publication in a little book) on daylighting simulations. And as you can guess: RADIANCE and examples calculated using it will make up the major part of the talk. So there is spreading of the information. Best regards, Fred ---------------------------------------------------------------------------- Friedrich Sick MAIL : Fraunhofer Institute for Solar Energy Systems Oltmannsstr. 22 D 7800 Freiburg, West Germany PHONE: +49 (761) 4014 133 FAX: +49 (761) 4014 132 EMAIL: sick@ise.fhg.de ---------------------------------------------------------------------------- *** NOTE: NEW FAX NUMBER: +49 (761) 4014 132 *** ---------------------------------------------------------------------------- Date: Fri, 19 Feb 93 09:06:12 PST From: greg (Gregory J. Ward) To: sick@ise.fhg.de Subject: Re: Question concerning BRTDfunc's Hi Fred, OK, working just with the reflector example, we defined sae_red thusly:

  void metdata sae_red
  5 sae_refl sae_refl.dat reflector.cal sae_theta sae_phi
  0
  5 1 .01 .01 .9 .00258

The first string argument above is a function that modifies the data value in the file (correcting for the projected area of the object in this case). The fourth and fifth string arguments are functions that, for a given (normalized) source ray direction, compute WHICH VALUES to look up in the data file. This is a bit of a peculiar example, because we happen to have data that gives reflectance as a function of the angle to the surface normal (in degrees) and the angle between the reflected ray direction and the source incident direction. Observe the definitions for these functions given in reflector.cal:

  { entrance angle (source to normal) }
  sae_theta(x,y,z) = acos(x*Nx+y*Ny+z*Nz)*180/PI;

  { observation angle (view to source) }
  sae_phi(x,y,z) = acos(-(x*Dx+y*Dy+z*Dz))*180/PI;

Again, the x,y,z parameters to these functions, as supplied by the Radiance renderer, are the normalized source ray direction. In sae_theta, this vector is used in a dot product against the surface normal (Nx,Ny,Nz) to compute the polar angle. In sae_phi, a dot product with the incident ray direction (Dx,Dy,Dz) (directed always towards the surface) is used to compute the "observation angle" (ie. the angle between source ray and incident ray). These two angles are then used to look up values in the following data file (sae_refl.dat):

  2
  0 20 3
  .2 1.5 2
  1.67 .026
  1.12 .019
  .558 .011

The first number (2) is the dimensionality of the data. The second line gives the beginning and ending coordinate index of the data's first dimension and the number of points in that dimension, 0 to 20 degrees with 3 points (ie. 0, 10 and 20 degree points). The third line gives the same information for the second dimension (ie. .2 and 1.5 degree points). The total number of points must equal the product of each dimension's cardinality, 2*3 == 6. The ordering of the following data points is such that the last dimension is being run through the fastest, ie.
  1.67 .026    corresponds to (theta,phi) of (0,.2) and (0,1.5)
  1.12 .019    corresponds to (theta,phi) of (10,.2) and (10,1.5)
  .558 .011    corresponds to (theta,phi) of (20,.2) and (20,1.5)

Any value between these points is interpolated using a simple multi-dimensional linear interpolant. Data outside the given domain is computed using linear extrapolation for a distance less than or equal to the distance between adjacent values, and falls off harmonically to zero outside that range. Thus, phi values up to 30 are extrapolated, and beyond that they fall off to zero. I hope this is clear enough. I regret that it has never been adequately documented. Perhaps one day we will get the funds we need to do a proper job of documenting the system. Until then, only the most adventurous users (like you) will ever attempt to use some of Radiance's more advanced features. -Greg ================================================================== IES SOURCES Date: Mon, 22 Feb 93 17:40:38 +1300 From: Architecture Students To: greg@hobbes.lbl.gov Subject: ies2rad output values Greg, Hi, I'm currently using the lighting simulation side of radiance to predict actual light levels in buildings that are not yet built. I have been having some problems with the modelling of luminaires from the ies library that you may be able to shed some light on. The area that is causing me trouble is matching the lumen output from ies2rad to realistic levels (or that which can be expected from a hand calculation). I have set up a test room of dimensions 6 x 6 x 3.5m and tried various configurations of light fittings from a single luminaire through to an array of nine fittings and measured the output. The luminance values gained by pressing 'l' when viewing through ximage seem to be consistently low when compared to expected values. The lumen output of ies2rad can of course be increased by using the -m option so that the levels in the test room match the expected value arrived at through a manual calculation, but to do this for every fitting in the ies library would require more than a little work. My question is, is it reasonable to expect ies2rad to produce realistic lumen output without being 'fiddled', or was it intended that every fitting would require 'manual calibration'? As a result of running several light fittings from the ies library through this calibration procedure, it seems there may be a relation between the expected lumen output of the lamp and the value of the multiplication factor -m used in ies2rad. If the expected lumen output of a luminaire (obtained from a table in the ies handbook) is divided by 570, it gives you a ballpark figure for -m. For example, luminaire no. ies25, which has two fluorescent bulbs each with an initial lumen output of 2770 lumens (cool white), has a total output of 5540. 5540/570 gives you 9.7, and using -m 10 in ies2rad gives you near realistic luminance values. As I'm not a programmer but just a humble user, I don't know if there is an obvious mathematical routine in ies2rad that explains this. Another question I have is regarding rpict's. I rendered tests with 5 different light fittings in the test room, 2 incandescent (ies01 and ies03), and 3 fluorescent (ies25, ies30 and ies41). All used the same oconv and rpict parameters, but the images with ies01 as the light source are extremely patchy, a bit like a disco floor! Do you have any idea about what causes this or how it is remedied?
I think it maybe the -ar setting in rpict, but why would it be so different for two luminaires that are very similar (ies01 and ies03). I have included 2 pic files (compressed format) that illustrate this, test_ies01_9.pic has a 3x3 array of ies01 luminaires as the light source, test_ies03_9.pic has a 3x3 array of ies03 fittings. The image is a view from just below the ceiling looking directly at the centre point of the floor (so the lights are behind the viewpoint ie. out of sight). Grateful for any comments or suggestions, thanks, Nick Warring Victoria University of Wellington School of Architecture New Zealand studs@arch.vuw.ac.nz PS. I hope this isn't too big for your mailbox. Date: Mon, 22 Feb 93 15:08:02 PST From: greg (Gregory J. Ward) To: studs@arch.vuw.ac.nz Subject: Re: ies2rad output values Hello Nick, I'm glad to hear that you're using Radiance for its intended purpose. I'm sorry you've been getting unexpected results! The problem seems to be with the IES luminaire data files themselves. I took a closer look at the files, and noticed that the tables they were taken from give output in terms of candelas per 1000 lumens. Since the IES data files are exact replicas of these tables, one must multipy their values by the expected lumen output of the fixture over 1000. For example, if luminaire ies25 were expected to have a total output of 4800 lumens, you would use -m 4.8 for ies2rad to give the correct absolute levels. >From what you have said, this would still seem to leave a factor of two unaccounted for, but I have checked the results and they seem to work for me. Are you remembering to account for the reflectance of the surface in your hand calculation of luminance? Your second problem with splotchiness in your output is due to the way ies2rad generates certain fixture geometries. Ies01 in particular is a (spherical) pendant fixture. According to the IES specification, this fixture should be represented as an actual sphere. Unfortunately, the standard data file actually gives a cubical geometry for this fixture, and ies2rad interprets it thusly. The top and bottom faces are modeled as emitters, and the four side faces are modeled as glowing but otherwise passive surfaces. (This preserves the far field output distribution while minimizing the number of light sources and the associated calculation cost.) Due to how Radiance computes interreflection, the side faces do end up contributing to the illumination of the space, even though they should not. This is really a bug, and though I was aware of it before, I didn't realize it could cause such serious artifacts. I will fix it for the next release. The best fix is to change the ies01 file so it will correctly generate a spherical geometry. Alter the first line after TILT=none to read: 1 1000. 1.0 21 1 1 1 -1.00 0 0 The -1 is the funky IES way to specify a sphere. Good luck! -Greg Date: Wed, 28 Jul 1993 12:43:58 -0500 From: srouten@rubidium.service.indiana.edu To: greg@hobbes.lbl.gov Dear Greg, Reuben and I are pulling our hair out. We are trying to understand primitives for generating accurate luminaires, but neither of us has a background in engineering or physics! We have a copy of the IES Handbook but the terminology is arcane to us. Can you help us navigate an example? If so, ies01.rad is a good starting place: 1) in the header, I see '0 watt luminaire' which seems to imply that an argument within the file can be changed to correspond with 'n' watts. If so, is this argument related to 'lamp*ballast factor = 1'. 
In past experiments I remember changing the last argument to the brightdata primitive from 1 to a higher number and altering the overall brightness of the luminaire, but I have no idea what effect i was causing relative to watts, lumens, or footcandles. 2) In the light primitive, this luminaire has an rgb radiance of 10.7693 given in Watts/sr/m^2? Our main difficulty is understanding the units in which light is specified in Radiance. 3) Why is the geometry of ies01 specified in rectangular polygons when the actual luminaire is a spherical pendant? 4) Once we figure this out we are interested in modelling some obscure lights, like a flashing highway barricade, or perhaps even some imaginary ones. Since IES data files aren't available for imaginary lights, can we rely on Radiance to produce accurate distributions if we create the geometry of the fixture in great detail? We've been over what we think is all of Radiance's documentation, including the digests, so we're only bugging you as a last resort. Hope you dont mind. -Scott ps. I put another of our pictures in xfer. Its a project for an architectural competition. Its called glass.pic. Date: Thu, 29 Jul 93 11:46:28 PDT From: greg (Gregory J. Ward) To: srouten@rubidium.service.indiana.edu Subject: light sources Dear Scott and Reuben, Light sources are tricky buggers, and none too easy to understand in Radiance. For starters, you should realize that the IES example luminaire files are VERY bad examples. Many of the fields (such as #watts) are carelessly done or just plain wrong, and the files are mostly for testing input procedures, not for lighting simulation. The units generally used for light quantities in Radiance are watts/sr/m^2, and they are spectrally-dependent. Therefore, an incandescent fixture will have different quantities of red, green and blue compared to a fluorescent fixture with the same total output (ie. lumens). You can use the program "lampcolor" to do some simple calculations along those lines. The relationship between watts, lumens and radiance is difficult to grasp not only due to the dependence on efficacy and color, but also due to geometry. The total output of a light source cannot be determined in Radiance without considering the emitting area as well. I really don't want to go into detail, since I often get confused myself. I don't recommend using Radiance to compute light output distributions from fancy luminaire geometry. It is bound to be inaccurate. I hope someday to work on a tool for this purpose, but Radiance is not it. -Greg P.S. I took a look at your picture. It's quite nice, but I'm not sure what exactly I'm looking at. ~s Radiance Digest, v2n5, Part 2 of 2 Dear Radiance Users, Here is the second half of Volume 2, Number 5 of the Radiance Digest. You should have received the first half already. If you don't get it in a day or so, write me e-mail and I'll resend it to you. As always, I ask that you do NOT reply to this letter directly, but write instead to GJWard@LBL.GOV if you have any questions or comments, or if you wish to be removed from the mailing list. These are the topics covered in this mailing: RPICT PARAMETERS - A couple questions answered re. 
rpict EXTENDING RADIANCE - How to add materials/surfaces to Radiance NEW BRTDFUNC AND NEON - An extended reflectance type and neon lights LIGHT SOURCE ACCURACY - Near-field accuracy test of light sources BRIGHTNESS MAPPING - Going from simulation to display PARALLEL RADIANCE AND ANIMATIONS - New features of 2.3 and animation AMIGA PORT - Reorganized Amiga port PVALUE - Getting values from pictures BACKGROUND COLOR - Setting the background in a scene DEPTH OF FIELD - Simulating depth of field A COMPANY CALLED RADIANCE - Not mine -- someone else's! Hopefully, I won't let a whole year go by before the next posting! -Greg ================================================================= RPICT PARAMETERS Date: Sun, 28 Feb 93 09:33:01 -0500 From: macker@valhalla.cs.wright.edu (Michael L. Acker) To: greg@hobbes.lbl.gov Subject: rpict parameters Greg, Could you explain the purpose and use of the 'sp' parameter to 'rpict'? I'm a student who has been using Radiance 2.1 to do some graphics modeling as part of an independent graphics study. The scene that I am currently working with has objects as large as 100 feet across and as small as 1/2-inch wide. With sp at 4 (the default) the 1/2-inch objects are not completely represented in the image (missing parts I assume from inadequate sampling). However, when I reduce sp to 1, the 1/2-inch objects appear more complete (and smoother) though still not completely represented. Also, if I increase the ambient bounces (ab) to 1 or greater but leave all other parameters at their default, rpict produces images with very 'splotchy' surfaces. For example, white walls look as if they have had buckets of lighter and darker shades of white thrown at them. The problem appears to lessen if I reduce the surface specularity and increase the ambient super-samples (as). Could you give me some insight into the proper parameters to use that will smooth out the images? Or could you provide some example rendering statements with the parameter lists? Or do you do some preprocessing with oconv or postprocessing with other utilities, like pfilt, to smooth out (improve) the images? I'd appreciate any information you can offer. Thanks, Mike Acker, macker@valhalla.cs.wright.edu Date: Mon, 1 Mar 93 09:18:59 -0800 From: greg@pink.lbl.gov (Gregory J. Ward) To: macker@valhalla.cs.wright.edu (Michael L. Acker) Subject: Re: rpict parameters Hi Mike, In version 2.1 of Radiance, the -sp (sample pixel) parameter was changed to -ps (pixel sample) to make way for new -s? parameters to control specular highlight sampling. Since the -sp option has disappeared in version 2.1, you must either be using a previous version or a different option. At any rate, anti-aliasing in Radiance requires rendering at a higher resolution than the one you eventually hope to display at, then using pfilt to filter the image down to the proper size. A set of reasonable parameters to do this might be:

  % rpict -x 1536 -y 1536 [other options] scene.oct > scene.raw
  % pfilt -x /3 -y /3 -r .65 scene.raw > scene.pic

In this case, the final image size will have x and/or y dimensions of 512, and even small objects should appear smooth. As for the ambient bounces, it sounds as if the specular component of your surfaces may be too high. Non-metallic surfaces generally don't have specular components above 5%. Check out the document ray/doc/notes/materials for further guidelines. The splotchiness can be reduced by increasing the -ad and the -as parameters (as you seem to have discovered). Let me know if I can be of more help.
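[As a concrete illustration of that last piece of advice, a rendering command along the lines of

  % rpict -ab 1 -ad 512 -as 256 [other options] scene.oct > scene.raw

raises the number of ambient divisions (-ad) and ambient super-samples (-as); the particular values shown here are only illustrative starting points, not settings recommended in the exchange above.]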
Sometimes it is easier if you send me an example file with view parameters, if youre scene's not too big. -Greg ========================================================================= EXTENDING RADIANCE Date: Sun, 7 Mar 93 14:07:26 PST From: Mark J Young To: GJWard@lbl.gov Hello. Your name came up two days in a row for me. Yesterday I read your 1992 CG paper, "Measuring and modeling anisotropic reflection". I really enjoyed it and learned a good deal from it. I wish there were more graphics papers than had such a satisfying blend of science and graphics. Then I was reading a newsgroup posting that indicated that you have written a rendering package. So I was moved to ask your advice. Myself and a few colleagues at NYU are looking for rendering software that could be reasonably modifiable for the following purposes. We do psychophysical experiments and modeling of depth cue fusion (in the sensor fusion sense) in human 3D perception. Some of us also do modeling and psychophysics concerning color space in human perception. Those folks are looking towards extending that work to include reflectance models (of the internal kind). There is a desire to have graphics software that could generate stimuli for both the depth and color/reflectance experiments. There would need to be a variety of rendering pipeline structures, surface models, color models, and reflectance models employable. I have written a crude 3D modeler that allows me to independently manipulate the various 3D cues to a surface's shape in an abstract geometrical way. I didn't do anything sophisticated about the color representation or the reflectance model. We are at a point where I either do alot of work on this package or find a mature package that can be modified towards our needs. We would obviously like something that is extremely modular. My package is object-oriented (C++) and a paradigm like that seems very well suited to the kind of extensions we need to make. Do you know of a rendering package that is object-oriented or is of the flavor that we are looking for? Also, could you tell me where I can find "Radiance"? Thank you for any help you can give. Regards, Mark J. Young Experimental Psychology Vision Laboratory New York University NASA Ames Research Center 6 Washington Place Mail Stop 262-2 New York, NY 10003 Moffett Field, CA 94035 (212) 998-7855 (415) 604-1446 mjy@cns.nyu.edu mjy@vision.arc.nasa.gov Date: Mon, 8 Mar 93 11:10:16 PST From: greg (Gregory J. Ward) To: mjy@maxwell.arc.nasa.gov Subject: human perception modeling Hi Mark, Thanks for the compliment on my paper. Yes, I have written a fairly mature simulation and rendering package called Radiance. You may pick it up by anonymous ftp from hobbes.lbl.gov (128.3.12.38), along with some test environments, libraries and so on. The reflectance model built into Radiance is that presented in the `92 Siggraph paper. Additional hooks are provided for procedural or data-driven reflectance models, though the resulting simulation will be less complete as it will not include indirect highlight sampling (ie. highlights caused not by light sources but by reflections from other surfaces). The code is fairly modular and extensible, though it was written in K&R C for maximum portability and speed. (Also, C++ was not around when I started on the project 8 years ago.) 
Adding a surface model requires writing two routines, one to intersect a ray with the surface, and another to answer whether or not the surface intersects a specific axis-aligned cube (for octree spatial subdivision). Adding a reflectance model requires writing a routine to determine the reflected value based on an incident ray, a list of light sources (actually a callback routine that computes a coefficient as a function of source direction and solid angle) and other ray values as required. The rendering pipeline is very flexible -- too flexible for most people. It makes it somewhat difficult to learn and to master. There are about 50 programs altogether. Even if you decide to use your own renderer, you may find some of the programs provided with Radiance useful to your research. The one part of Radiance that is currently difficult to change is the RGB color model. I have been prodded by several people to introduce a more general spectral model, which is something I had planned to do at some point but I have been hung up by a lack of data and motivation. Right now, people can define the three color samples to mean whatever they want, but to get more samples, multiple rendering runs are necessary. I look forward to hearing more from you. -Greg Date: Tue, 20 Jul 93 13:19:52 MED From: bojsen@id.dth.dk (Per Bojsen) To: GJWard@lbl.gov Subject: Bezier Patches? Hi Greg, I was wondering how easy it would be to add support for Bezier patches as a new primitive in Radiance? What is the status of Radiance and future development of Radiance? Is it still free or is it going commercial? -- Per Bojsen The Design Automation Group Email: bojsen@ithil.id.dth.dk MoDAG Technical University of Denmark bojsen@id.dth.dk Date: Tue, 20 Jul 93 09:07:44 PDT From: greg (Gregory J. Ward) To: bojsen@id.dth.dk Subject: Re: Bezier Patches? Hi Per, Adding a new surface primitive is easy in principle. Two routines are needed: one to determine whether or not a surface intersects an axis-aligned cube, and another to determine the intersection point and surface normal for a ray with the surface. Neither one is particularly easy or difficult for Bezier patches, but I have little need for them myself. If ever I do need such a thing, I use gensurf to create a smoothed, tesselated version for me. Believe it or not, the Department of Energy STILL has not decided what they want to do with Radiance, or if they have, they haven't informed me. -Greg ================================================================== NEW BRTDFUNC AND NEON From: phils@Athena.MIT.EDU Date: Wed, 19 May 1993 18:18:50 -0400 To: greg@hobbes.lbl.gov Subject: Mirrored glass? Greg, Do you have any suggestion for specifying mirrored glass? Reflective on the outside and low transmission on the inside. Neither "glass" or "mirror" materials offer enough control of parameters. I'm not having much luck in using "trans" materials either. Have any ideas what skyscrapers are made of? Thanks, Philip Date: Wed, 19 May 93 17:36:45 PDT From: greg (Gregory J. Ward) To: phils@Athena.MIT.EDU Subject: Re: Mirrored glass? Hi Philip, I really wish I knew more about coated glazings so I could implement at good material type for them. It's not a difficult problem; I simply don't know the physics of these materials. The measurements I have access to deal only with photometric transmittance -- they don't consider color and they don't look at reflection. 
I suppose I could take a hack at it by assuming that reflectance either doesn't depend on incident angle (wrong) or that it follows Fresnel's law for a specific dielectric constant (possibly complex). The thing is, I'd like to do it right the first time, but I need more information. Let me do a little investigation on my end to see what I can come up with. I can't think of a good solution with the current material types unless you use the BRTDfunc and make it sensitive to orientation. I'll get back to you. -Greg Date: Thu, 27 May 93 16:18:16 PDT From: greg (Gregory J. Ward) To: phils@Athena.MIT.EDU Subject: Re: Mirrored glass? Hi Philip, I haven't forgotten about the glazing problem. In fact, I've spent a good part of the past week working on it. As it turns out, no one seems to have a very good handle on the behavior of coated glazings, including the manufacturers! Our resident window experts have been using a formula based on reflection function fits to clear and bronze glazing. I have implemented their formulas in Radiance via the BRTDfunc type. Unfortunately, I had to modify this type in the process in order to get it to work, so you'll have to pick up a new beta release which I've put in xfer/4philip.tar.Z on hobbes.lbl.gov. The function file ray/lib/glazing.cal is contained therein. [Don't try to upload this file -- it's already been incorporated in 2.3] Let me know if you need any help using it. -Greg Date: Thu, 30 Sep 93 10:39:05 EST From: TCVC@ucs.indiana.edu Subject: Radiance and neon To: greg@hobbes.lbl.gov Hello Greg, 1) I have been modelling the effects of neon in the environment. Where its direct component is irrelevant, I have used instances of 'light' polygons in order to explore the effects of shape and color in relationship to specular surfaces. This certainly creates a rapid rendering time. But now I am also interested in the direct component of neon. It appears that point sources are arrayed along any "light" or "glow" emitting surfaces. Unfortunately, I appear to have no control of their density or spacing, so that uninstanced(sp!?) neon appears like a string of christmas lights attached to the tube. Are there parameters I can adjust? I am using 1 unit = 1 foot, sometimes 1 unit = 1 meter. Most views are fairly distant. A related problem is in my representing the effect of 3 neon tubes located behind a piece of frosted glass. The glass surface is about 8 feet long and .5 feet wide. I am interested in the direct component effect onto the surface of a 6 foot diameter pillar located 1 foot in front of this narrow light box. Rather than building a model of the neon tubes with 'glow' and then placing a Translucent surface in front of it, I have tried to simply use a polygon to represent the glass, and have given it the attributes of 'glow'. The result is the effect of 4 bare lightbulbs lighting the column. How can I increase this density so that the effect is smoothed out? 2) Is the intensity of the sun in Gensky in proportion to the intensity of the output of IES2RAD? I need to merge daylight with measured electric light sources in a complex environment. Interreflection will play an important role. Should I create my own sun using an IES2RAD spotlight with the correct sun intensity at a great distance? Please share your thoughts! -Rob Date: Sat, 16 Oct 93 22:20:08 PDT From: greg (Gregory J. Ward) To: TCVC@ucs.indiana.edu Subject: Q&A Hi Rob, 1) "Coving" from extended light sources. 
The artifacts you are witnessing are the result of the internal limit Radiance has in breaking up large light sources. The constant is somewhere in src/rt/source.h, and I think it's set to 32 or 64 presently. You can try increasing this value and recompiling, but a better approach is to set -dj to .5 or .7 and/or break up your long light source into several shorter ones. By the way, you can use glow with a distance of zero instead of instancing "light" polygons if all you want is to avoid inclusion in the direct calculation. In release 2.3 (coming soon, I hope), a negative distance for a glow type also excludes the materials from indirect contributions, which can be a problem when -ab is greater than one and the sources are relatively small. 2) Sun from gensky. Yes, gensky does produce a sun with the proper brightness, using certain assumptions about the atmospheric conditions. If you wish to adjust the value up or down, I suggest you modify the gensky output rather than trying to create the source yourself, since gensky does such a nice job of putting the sun in the right place. Gensky in version 2.3 will offer several new options for adjusting solar and zenith brightness. -Greg Date: Sun, 24 Oct 93 20:22:24 EST From: TCVC@ucs.indiana.edu Subject: new material type To: greg@hobbes.lbl.gov Hi Greg, Thanks for your note and assistance regarding consortium and "neon". I have yet to post some images for you to hopefully enjoy... maybe by Thanksgiving! There is a "material" which Radiance does not yet support and which I am finding the need for. It is similar to what is called "chroma key blue" in TV. The situation is this: I have a scanned image of a tree without foliage. This was taken from a painting by the surrealist painter Magritte, and will be used in an upcoming production of a play titled SIX CHARACTERS IN SEARCH OF AN AUTHOR by the Italian author Pirandello. Several instances of this "tree" are to be located on a platform. The backdrop is a cloud-filled sky. The trees are seen silhouetted against this backdrop. They will actually be made by stretching a fine black net over a tree-shaped frame, then applying opaque shapes to the surface. Fine branches will be caulked into the net. The complex shape will be suspended via fine wires from the pipes above the stage. My solution was to map the scanned image of the sky onto a polygon of type plastic. This works well. I then mapped the trees onto a polygon of type glass (trees were made black on a white field in Photoshop). The result is that the trees are clearly silhouetted in their several locations between the audience and the backdrop. They cast predictable shadows etc... BUT the rectangular "glass" polygon surface is still present, and even though it has an RGB value of 1 1 1, its "presence" can be seen against the lighted backdrop... and light sources are seen as reflections on its surface. Any details other than black would be seen as transparent colors... fine for stained glass, BUT I need them all to be opaque. I propose an "invisible" surface which only has a visual presence where elements of an image are mapped onto it. Perhaps there is an option for rgb = 1 1 1 or rgb = 0 0 0 to be the invisible component. This is dependent on the nature of the image. Perhaps the "paint" or image data could take on the attributes of metal, plastic, trans, mirror, or glass... making it truly versatile. Materials that could be modelled this way are not limited to theatrical objects.
Sunscreens made of metal with punched holes and detailed architectural profiles are but a few of the objects that could take advantage of this new "material" type. Maybe I am missing the boat, and this is already a component of this excellent simulator... if so, please point it out. We have successfully used "trans" to represent stained glass and theatrical "scrim"... it's the invisible component that's needed. -Rob Date: Mon, 25 Oct 93 10:11:21 PDT From: greg (Gregory J. Ward) To: TCVC@ucs.indiana.edu Subject: Re: new material type Hi Rob, There is a new material type in the next release that may do what you want. It's called "BRTDfunc", and it's been modified since release 2.1 to permit a number of things, including varying diffuse reflectance using a pattern. I am not quite ready to release 2.3, but you may pick up an advanced copy I've made for you from hobbes.lbl.gov in /xfer/private/4rob.tar.Z. The private directory doesn't have read permission, so you won't be able to run "ls", but you should be able to pick up the file nonetheless. The only real changes I will make before the official 2.3 release will be in the documentation, so you probably won't need to recompile when it becomes official in a week or so. You can read the new ray/doc/ray.1 manual for an explanation, but what you want will look something like this:

  void colorpict tree_pict
  7 clip clip clip tree.pic picture.cal pic_u pic_v
  0
  0

  tree_pict BRTDfunc tree_mat
  10 0 0 0
     1-CrP 1-CgP 1-CbP
     0 0 0
     .
  0
  9 1 1 1
    .1 .1 .1
    0 0 0

The way I have given it here, you should face your tree polygon towards the audience, and the back will appear either transparent or fairly dark (where there are branches). Let me know if this works! -Greg Date: Tue, 26 Oct 93 16:04:48 EST From: TCVC@ucs.indiana.edu Subject: Re: new material type To: greg@hobbes.lbl.gov Thanks for the fast response! I can certainly get a lot of mileage out of the new material you have created. Your example produced a "cutout" of the image, so I reversed some parameters, making the scanned image opaque and the polygon transparent.

  void colorpict filigree
  9 red green blue robtree2f.pic picture.cal pic_u pic_v -s 7.5
  0
  0

  filigree BRTDfunc net
  10 0 0 0
     CrP CgP CbP
     0 0 0
     .
  0
  9 1 1 1
    0 0 0
    0 0 0

  net polygon tree
  0
  0
  12
     0    0   0
     6.5  0   0
     6.5  13  0
     0    13  0

I have yet to explore the properties of the image surface.... whether it's like plastic or glass or metal or all. It works excellently for this application though! -Rob ====================================================================== LIGHT SOURCE ACCURACY From: apian@ise.fhg.de Subject: lightlevels close to lightsources To: gjward@lbl.gov (Greg Ward) Date: Thu, 10 Jun 93 17:17:05 MESZ Dear Greg, what follows is a uuencoded cpio demo file concerning light intensities close to lightsources, and a problem. Given a disc (diameter=1), rtrace calculates irradiance for a point above the disc center, varying the distance between center and point. The values are compared with

  a) inverse square law, valid for distances >> disc diameter
  b) the analytical solution

Problem: The radiance values are too low for distances in the order of the diameter and smaller. For very small distances the values are in fact decreasing. Hm. Any help, ideas or is my testdemo wrong? (could be... could be..)
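[For reference, the analytical solution in question -- the irradiance on the axis of a uniformly diffuse disc of radius R and radiance L, at distance d from its center -- has the standard closed form E = PI*L*R^2/(R^2 + d^2), which approaches the inverse-square expression PI*L*R^2/d^2 when d >> R. In normalized form this is the corrected f(x) that appears later in this exchange.]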
The files:

  makefile              starts the rtrace stuff
  rquadrat.rad          demo geometry
  eins-durch-r-quadrat  gnuplot file

If you've got gnuplot, simply say "make demo" and tell gnuplot: load "eins-durch-r-quadrat" The two curves are 1/r^2 and the integrated solution, points are rtrace output. For -ds 0 all points lie on the 1/r^2 curve, as expected. Setting -ds 2 nicely shows the sampling of the disc, but -ds 0.001 results in a curve too low. TIA*1e8 Peter -- ---------------------------------------------------------------------- Peter Apian-Bennewitz apian@ise.fhg.de Fraunhofer Institute for Solar Energy Systems Tel +49-761-4588-123 (W-Germany) D-7800 Freiburg, Oltmannsstrasse 5, Fax +49-761-4588-100 >>> new phone number effective after Friday, 11.6.93 >>> new Freiburg postal code: 79100, effective 1.7.93 ---------------------------------------------------------------------- Date: Thu, 10 Jun 93 13:12:39 PDT From: greg (Gregory J. Ward) To: apian@ise.fhg.de Subject: Re: lightlevels close to lightsources Hi Peter, Your results, though disappointing, are not terribly surprising. You see, I don't use an analytical solution for disk sources. In fact, the real problem is that I don't sample them as disks at all. I assume they are approximately square, and thus sample points may fall outside the actual disk at the corners, particularly if -ds is small or -dj is close to 1. Points that fall outside the disk are not counted, so the resulting estimate is low. [But read ahead - there was a mistake in the integral] Intelligent sampling is difficult (ie. expensive) to do in general, so I don't usually do it. It would add a lot to the cost of the calculation because it has to happen every time a source is examined, which is all the time in Radiance. The only case that is handled properly is parallelograms (incl. rectangles). Thus, if you want a correct result, you'd better start with a rectangular light source. Fortunately, most sources are approximately rectangular, and it is cheap to sample them. Just out of curiosity, why did you decide to test this case? Because you know the analytical solution, or because you have a real need to calculate illumination very close to a disk light source? (BTW, you'll find that spheres are even worse -- I don't substructure them at all in Radiance!) -Greg P.S. Here is a bgraph input file to plot the same stuff. You can type:

  bgraph comp.plt | x11meta
or
  bgraph comp.plt | psmeta | lpr -P PostScript_printer

These programs are distributed with Radiance 2.1. I wrote them ages ago, before GNU came into being.

:::::::: comp.plt ::::::::
include=function.plt
xmin=.001
xmax=8.5
Alabel=Exact
Blabel=Rtrace
A(x)=PI*2*(1-x/(sqrt(x*x+1)))
Bdata=rtrace.out

From: apian@ise.fhg.de Subject: Re: lightlevels close to lightsources To: greg@hobbes.lbl.gov (Gregory J. Ward) Date: Fri, 11 Jun 93 12:20:25 MESZ Hi Greg, thanks for your answer. > Just out of curiosity, why did you decide to test this case? Because you Yep, the disc integral is a lot easier to do. The first idea was to compare the 1/r^2 approximation with the real thing, to estimate errors in the measurements I make. Only as a second thought came the idea of comparison with rtrace. Probably more of academic interest. greetings Peter -- From: apian@ise.fhg.de Subject: nice values in rpict/rtrace To: gjward@lbl.gov (Greg Ward) Date: Fri, 11 Jun 93 14:51:38 MESZ one more suggestion: The default nice values in rt/Rmakefile are a bit of an extra.
If a user wants lower priority, normal nice is available, but it's a bit tricky to get rid of the nice once rpict has set it. This can be nasty in shell scripts and NQS / HP-taskbroker batch processing. IMHO, suggestion: as a default, no nice settings in rt/Rmakefile. (BTW: my integral was a bit wrong, we'll look into this) Peter Date: Fri, 11 Jun 93 08:15:14 PDT From: greg (Gregory J. Ward) To: apian@ise.fhg.de Subject: Re: nice values in rpict/rtrace You know, Rmakefile is there for you hackers to play with. If no other processes are running on the machine, the nice value has no effect. It's just there so your time-consuming processes that are not interactive, rtrace and rpict, don't slow you down too much when you're trying to do something. It's also there to protect me and my less knowledgeable users from the scorn of system administrators. I'm one of them, so I know how scornful they can be. -G From: apian@ise.fhg.de Subject: rtrace&integral To: gjward@lbl.gov (Greg Ward) Date: Fri, 11 Jun 93 15:05:04 MESZ ok ok ok ok ok ok ok , f(x)=pi*(1-x*x/(x*x+1)) looks a lot better, your disc subdivision is ok. sorry for all the noise. -- Peter Apian-Bennewitz apian@ise.fhg.de Date: Fri, 11 Jun 93 08:24:32 PDT From: greg (Gregory J. Ward) To: apian@ise.fhg.de Subject: Re: rtrace&integral I'm much relieved to hear it! I can't believe I was willing to just lie down and admit that my calculation was all wrong without even checking your work. Oooo! It makes me so mad! The new plot does look much better, though I notice the value still does drop precipitously at extremely close range. Using a value of .7 for -dj changes that drop to noise above and below the correct value, which is better. The reason it drops is that there is a limit to how far the subdivision will go, so the calculation doesn't go berserk next to light sources. Thanks for following through on this. -G ==================================================================== BRIGHTNESS MAPPING To: raydist@hobbes.lbl.gov, raylocal@hobbes.lbl.gov Subject: Hello Date: Tue, 15 Jun 93 18:35:47 +0100 From: Kevin Jay Marshall Hello again, I am not really a new Radiance user, but I am not by any means an expert user, either. I first have to say that I have found Radiance to be an exceptional collection of programs and enjoy using them when I get a chance. Lately I have been really studying Radiance and trying to understand the meat inside the programs, and have found it a little hard to follow. I think that most of my problem is lack of experience and knowledge in lighting technique. I am hoping that someone out there is able to give me a hand in what I am learning. I am creating a scene that does not contain much lighting but a couple of direct sources read in from an IES file format. What I am having a problem with is the intensity of light after I reset the exposure with pfilt: the lights are coming out really too bright. I have read the segment that Greg wrote Jim Callahan on Exposure in the Radiance Digest. So I would like to ask a couple of questions also. First, in the letter in the digest it was stated that the exposure was a more precise way of setting the image color to look more realistic. Would it be safe to say that the use of pfilt would be more correct than a gamma correction applied to an image? I only ask because I am not absolutely 100% sure of the correct answer, but I would guess the answer is yes. So once I know that answer it will be easier to understand the next question.
I would like to find the correct exposure that suits the realness of the picture better. Would I be correct if I decided to set the exposure value based on the ratio of my lights' luminous efficiency to that of white light? The lights I am using all have the same luminous efficiency, about 21 lumens/watt, as opposed to the 179 lumens/watt white light in Radiance. Could I use the two-pass pfilt averaging of the exposure values and then multiply the luminous value by 11%, which is 'my light/white light'? Or, on the other hand, would I just use the one-pass exposure setting and set the exposure to that of white light, 179 lumens/watt? They both look good to me, but the first method is brighter than the other. I am just getting started with all these luminous levels and whatnot, so my main question is: am I on the right track? Is one of my solutions better than the other, or am I absolutely not understanding my problem correctly? Then another question if someone has time. Since I am new to luminous/illuminance engineering, what is the advantage of calculating radiance values as opposed to calculating luminous values? I spent the other night attempting to understand why, and what I could understand is that radiance is based upon the amount of light emitted from a black body, and its measurement through, and from, materials is wavelength independent. I also looked up the definition of the candela, which is based upon radiance measurements that are converted to luminous measurements by a max luminous efficiency of 683 lumens/watt at a wavelength of 545 nm. I realise I am probably asking questions that would be better answered in a course, but I figure it never hurts to ask. Well, thanks for the time. -Sincerely, Kevin Date: Tue, 15 Jun 93 13:01:44 PDT From: greg (Gregory J. Ward) To: kevin@sigma.hella.de Subject: exposure, etc. Hi Kevin, Luminous efficacy doesn't really have much to do with luminance or the perception of brightness. Luminous efficacy tells how efficiently a lamp converts electrical energy into visible light. You can still use a greater quantity of inefficient fixtures to outshine the more efficient ones, but as a researcher in energy conservation it is my job to criticize you if you do. If I understand you correctly, your basic question is, 'how do I display my image so as to reproduce the same viewer response as the actual environment?' I believe this question is still open to debate, but at least it is starting to attract some of the research attention from the computer graphics community that it deserves. One answer has come from Jack Tumblin and Holly Rushmeier. Looking back to subject studies conducted in the early 1960's, they devised a formula that maps world luminance values to display luminances in a way designed to evoke the same viewer response as the actual scene (would). So far, their paper has appeared only as a technical report (number GIT-GVU-91-13 from the Georgia Institute of Technology, College of Computing, Atlanta, GA 30332-0280), but an abbreviated version will soon appear in IEEE Computer Graphics and Applications [Fall 1993 issue]. An implementation of the Tumblin-Rushmeier display mapping is appended at the end of this message. In the meantime, Chiu, Herf, Shirley, Swamy, Wang and Zimmerman have done some interesting work on mapping brightness adaptively over an image in their 1993 Graphics Interface paper. They refer also to Tumblin and Rushmeier's work, which seems to be becoming widely accepted as the standard even before it's been properly published.
The approach I've been working on lately is a simple linear scalefactor, which can be applied with pfilt, or dynamically with a new version of ximage. The scalefactor is based on some early 1970's subject studies by Blackwell, and it attempts to reproduce visible contrast on the display to correspond to the visible contrast in the environment being simulated. For a standard color monitor, the formula boils down to: exposure = .882/(1.219 + L^0.4)^2.5 where the world adaptation luminance, L, is expressed in candelas/m2. To find this value, it's easiest to use the 'l' command of ximage after selecting the area at which the viewer is supposedly looking. Let's say that ximage reports back a value of 25. You could then apply pfilt as follows: pfilt -1 -e `ev '.882/(1.219+25^.4)^2.5'` orig.pic > adjust.pic Be sure to use the -1 option, and this must be the first exposure-adjusting application of pfilt to your picture, otherwise you will not be starting from the original values and the exposure will be off. Like I said before, the next version of ximage will have a new command, '@', that performs this adjustment interactively. The result is a dark display if you're starting from a dark environment, and a normal display if the environment has good visibility. Again, do not confuse luminous efficacy with brightness perception. The efficacies given in common/color.h are for the visible spectrum only, and are different from full-spectrum efficacies reported for the sun and electric light sources. The only reason for using spectral radiance (the physical unit) instead of luminance is to have the capability of color. The conversion factor between the two of 179 lumens/watt corresponds to uniform white light over the visible spectrum. It does not include lamp losses or infrared and ultraviolet radiation, as do other luminous efficacies, and is purely for conversion purposes. I could pick almost any value I like, as long as I use it consistently to go back and forth between radiant and luminous units. -Greg ---------------------------------------------- { BEGIN tumblin.cal } { Mapping of Luminance to Brightness for CRT display. Hand this file to pcomb(1) with the -f option. The picture file should have been run previously through the automatic exposure procedure of pfilt(1), and pcomb should also be given -o option. Like so: pfilt input.pic | pcomb -f tumblin.cal -o - > output.pic If you are using pcomb from Radiance 1.4, you will have run without pfilt and set the AL constant manually. If you are using a pcomb version before 1.4, you will have to do this plus change all the colons ':' to equals '=' and wait a lot longer for your results. Formulas adapted from Stevens by Tumblin and Rushmeier. 29 May 1993 } PI : 3.14159265358979323846; { Hmm, looks familiar... } LAMBERT : 1e4/PI/179; { Number of watts/sr/m2 in a Lambert } DL : .027; { Maximum display luminance (Lamberts) } AL : .5/le(1)*10^.84/LAMBERT; { Adaptation luminance (from exposure) } sq(x) : x*x; aa(v) : .4*log10(v) + 2.92; bb(v) : -.4*sq(log10(v)) + -2.584*log10(v) + 2.0208; power : aa(AL)/aa(DL); mult = li(1)^(power-1) * ( LAMBERT^-power/DL * 10^((bb(AL)-bb(DL))/aa(DL)) ); ro = mult*ri(1); go = mult*gi(1); bo = mult*bi(1); { END tumblin.cal } [And here is a response from Charles Ehrlich...] 
Date: Wed, 31 Dec 69 16:00:00 PST From: SMTPMcs%DBUMCS02%Servers[CEhrlich.QmsMrMcs%smtpmrmcs]@cts27.cs.pge.com Subject: Kevin's email To: greg@hobbes.lbl.gov Kevin, You have stumbled on (or rammed into) one of the stickiest issues related to renderings and visualization. I categorize it as "How does the eye measure light?" I would like to share with you my understanding of the situation. Radiance is based on the idea of Scientific Visualization...the very same kind of algorithms and techniques used to take measurements of any other kind of real-world physical data, like the density of the earth from sonic echoes or cloud coverage from radar imaging, are used to represent visual data. The only difference is that Radiance happens to be best at measuring a physical quantity that we all are very familiar with...light. As a proof of concept, display a Radiance image on screen with ximage, position the cursor somewhere in the image, and press the "i" key. Notice the bands of color that appear. This is a "False-Color" image, just another way of displaying luminance values, different from the way our eye "measures" luminance values. 2. Very little is known about the way our eyes actually measure light. Radiance doesn't try to figure this out to the Nth degree. It uses an approximation technique that is similar to the technique used by the color photographic process (none in particular). When you say that your image appears too bright, this is because, indeed, that is the way a camera would represent the same scene on a photograph. Really! I've proven it to myself. Alternatives to the linear, photographic mapping technique exist, but are not refined (look in the ray/src/cal/cal directory for tumblin.cal). More work needs to be done that involves empirical studies of human response. In my opinion, a gamma correction comes closer to the way the human eye perceives light than a linear mapping. I've asked Greg to build into ximage a way of displaying images with gamma correction, and to build into pfilt a reversible gamma correction facility (so that the original, linear values of the image file can be retrieved). He does not want to do this because, indeed, the eye does not do "gamma correction." I believe that once it is figured out what the eye does, he'd be happy to implement that algorithm. For the time being, I use Adobe Photoshop to make the image look more realistic, if I'm less concerned with accuracy or design. 2.a. Radiance image files store many more magnitudes of luminance values than a photograph or a computer monitor can reproduce. How the eye processes these out-of-range luminance values is still largely unknown. For the lighting designer, or concerned, informed architect using Radiance, what it should tell you when you've created a scene that you judge to have "light sources that are too bright" is that you need better light sources...ones with a better cutoff angle and/or higher specularity grating and/or VDT grade luminaires. You could also try an indirect lighting solution that makes the ceiling surrounding the light fixtures bright so that the CONTRAST RATIO between the light fixtures and the wall is not so great, effectively reducing the perceived brightness of the light fixtures. You could also try using a brighter colored carpet so that more light gets reflected onto the ceiling if an indirect (uplighting) solution does not work for you. Note: changing the brightness of your carpet from 5% to 10% doubles the amount of reflected light!
I haven't seen the particulars of your scene and I don't know how you're calculating your images, but make sure you're doing an indirect (-ab>0) calculation to initially set the -av value if you're using direct fixtures. If you're using indirect fixtures, then your whole calculation should be an indirect one (-ab>=1). To set the -av value, set up a .rif file for rad to calculate an interactive (rview) image with ambient bounces. Once the image is fairly well refined, pick a point in the scene that you think represents the average ambient value in the shadows of your scene. Rview will report something like: ray hit such_and_such_surface surface_type surface_mat at point (x.xxxxx, y.yyyyyy, z.zzzzzzz) with value (r.rrrrrr, g.gggggg, b.bbbbbb). This last value is the luminance at that point. Write it down and try a few other places in the scene. Unless you want colored shadows, average each value with the function (.3*Red+.59*Green+.11*Blue), which is the function for the eye's (and Radiance's) average brightness given the three primary colors. Use the resulting value in subsequent calculations of rview or rpict for all three coordinates of -av. I believe that the difference between the luminous efficacy of your fixtures and that of white light has little to do with pfilt exposure settings. It has everything to do with the description of your fixtures from within Radiance. I assume that you're using some kind of low-pressure sodium fixture that has less white in it? I believe that you should be adjusting your fixture's intensity such that it matches that of white light using the same function above (verify this with Greg). Very little work has been done (to my knowledge) with different colored light sources and what to do with them once an image has been calculated (how to average them so that the image "looks" accurate). Again, this depends upon how the eye functions, and I suggest using Adobe Photoshop if you must use colored light sources. For the most part, unless you're using multiple types of light source colors (low-pressure sodium and incandescent) you should just assume that the lights are white. If you're talking about the difference between a good tri-phosphor fluorescent light and daylight, I'd say you're wasting your time, unless the difference between these sources is what is important to you. Most people find the orange hue of daylight film used in incandescently lighted places to be annoying. The same goes for the bluish tint one finds with tungsten film used in daylighted spaces. Again, we do not know exactly how the eye responds to light, especially when it involves great contrasts, and when it involves multiple colors of light. Even if we did, Radiance uses only three samples of the spectrum...which is not enough to accurately describe what is going to happen to the light from a light source with all sorts of spikes and valleys in its spectrum when it gets mixed together with other spectrally diverse sources in a scene. And furthermore, I know of no commercially available computer monitor or photographic film with more than three colors of phosphors or dyes. But, who knows, maybe there's a Radiance-16 out there with 16 samples of the spectrum. (Don't hold your breath.) But then again, even HDTV only has three colors of phosphors. But, you say, the eye only has three "colors" of receptors. I say that I've never seen a computer monitor that can display "fluorescent," day-glow orange, purple or green. Why? -Chas chas@hobbes.lbl.gov Date: Tue, 15 Jun 93 16:35:46 PDT From: greg (Gregory J.
Ward) To: chas, kevin@sigma.hella.de Subject: remarks on Chas' remarks Hi Kevin (and Chas), I agree with everything Chas had to say (as far as I can remember!). I did forget to mention gamma correction. Gamma correction is called that because most CRT-based display systems exhibit a natural response curve that to reasonable approximation follows a power law, ie: display_luminance = maximum_luminance * (pixel_value/255)^gamma The gamma correction done by ximage and the other converters is designed to compensate for the built in response function of the display monitor or output device, but it can be used to increase or decrease contrast if desired as well. For example, your monitor may have a gamma value of 2.6 (typical). If you want to display an image with artificially increased contrast (eg. for more vibrant colors), you can intentionally underrate the gamma with ximage, (eg. ximage -g 1.5). Similarly, you can artificially decrease contrast by overrating the gamma (eg. ximage -g 3.2). To find out what the actual gamma of your monitor is, you can look at the image supplied in ray/lib/lib/gamma.pic like so: ximage -g 1 -b gamma.pic Set your monitor to normal brightness and contrast, then match the display on the left with the grey scale on the right. The corresponding number is the correct gamma value for this monitor under these settings. Rather than setting the gamma with the -g option in ximage all the time, you may then define the environment variable GAMMA to this value, ie: # In .login file: setenv GAMMA 2.6 : Or, in .profile: GAMMA=2.6 export GAMMA This has the added advantage of setting the gamma value in rview, which doesn't have a -g option. If you have an SGI, I should mention that the way they handle gamma correction is a bit screwy. There is a system-set gamma value that neither indicates the actual gamma of the graphics display nor does it completely correct for the natural gamma of the monitor, but leaves the combined system response somewhere between linear and the natural curve in most cases. For example, the system gamma is usually set to 1.7. What this means is that if the monitor's natural response was around 1.7, then the graphics system has fully compensated for this and the response function is now linear. In fact, the monitor's gamma is larger than this, and the combined response ends up being around gamma=1.2, which is to what you should set the GAMMA environment variable. Hope this is more help than confusion... -Greg To: greg@hobbes.lbl.gov, chas@hobbes.lbl.gov Subject: Hello Date: Thu, 24 Jun 93 18:09:48 +0100 From: Kevin Jay marshall Greg and Chas, I wanted to write again and thankyou for all your help and to fill you in on all the results of the test that we made the other day. What we first did was to measure actual headlights on a car from one of the employee's here, then what we did was to take a picture of the car at night of what the car's headlights on the road actually look like. Then we took that data and simulated the headlights using Radiance. That was when I had the problem of how to use pfilt correctly, which I still cannot do. But thanks to the two of you and some experimenting of my own I understand why it is currently impossible to get the exact picture I am looking for to be the exact picture that my boss is looking for and so on. I also have come to respect the job that is trying to be accomplished by the pfilt program. But to continue on. 
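[Editor's note on the gamma formula in Greg's message above: to see what the response curve implies, consider the pixel value needed to display half of the maximum luminance on a monitor with gamma 2.6 (the example value mentioned there). Inverting display_luminance = maximum_luminance * (pixel_value/255)^gamma gives pixel_value = 255*(1/2)^(1/2.6), which ev '255*(.5^(1/2.6))' evaluates to about 195 -- not 128, which is why images displayed without gamma correction look too dark in the mid-tones.]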
So we created some pictures and then yesterday I had the opportunity to accompany my boss to the light channel here at Hella to see the actual headlights that were created from all this theoretical data. The results were excellent. My picture looked exactly like the real lights shown on the road, except for the brightness. I think the part my boss likes best is the ability to compute a rough picture, which is enough to get a clear idea of the distribution on the street, in about 5 to 10 minutes. Well anyway I thought you might be interested in the results. -Kevin Date: Thu, 24 Jun 93 09:35:09 PDT From: greg (Gregory J. Ward) To: kevin@candela.hella.de Subject: Re: Hello Cc: chas Hi Kevin, I'm glad you got it to (sort of) work. Like we told you, more sophisticated brightness mappings are possible with pcomb, but you have to know what you are doing, I think, to get good results. -G Date: Wed, 31 Dec 69 16:00:00 PST From: SMTPMcs%DBUMCS02%Servers[CEhrlich.QmsMrMcs%smtpmrmcs]@cts27.cs.pge.com Subject: Hella headlight study To: greg@hobbes.lbl.gov Kevin, I'm very encouraged to hear that your simulation worked!! How much time have you spent learning Radiance all together? Regarding the fact that the brightness of the image (presumably around the location of the headlights or where the beam hits the road) did not match the physical simulation...did you take any luminance measurements and correlate those with the luminance predicted by Radiance (using the "L" command within ximage?) I think that doing that might just convince your boss what Radiance is all about...namely that the pretty picture you get is just a by-product of the time-consuming, physically-based calculations going on behind the scenes. -Chas ======================================================================= PARALLEL RADIANCE AND ANIMATIONS From: matgso@gsusgi1.gsu.edu (G. Scott Owen) Subject: Re: parallel radiance To: greg@hobbes.lbl.gov (Gregory J. Ward) Date: Mon, 21 Jun 1993 13:44:35 -0500 (EDT) Greg, Has the parallel version of radiance been released yet? We were thinking of adapting Radiance to the PVM (Parallel Virtual Machine- see article in latest IEEE Computer)) environment. What do you think of this idea? Scott Owen Date: Tue, 22 Jun 93 15:58:01 PDT From: greg (Gregory J. Ward) To: matgso@gsusgi1.gsu.edu Subject: Re: parallel radiance Hi Scott, Although I still have not received permission to distribute the next release, I have made a beta copy of the software for you to test in the /xfer directory on the anonymous ftp account on hobbes.lbl.gov, in the file "4owen.tar.Z". Check out the manual page for "rpiece" in ray/doc/man1. Usage is a little awkward, so don't be shy with your questions. I picked up a copy of the article from IEEE Computer, but haven't taken the time to read it through carefully, yet. It seems like a really good approach. The approach I have taken is more simple-minded, but shares the advantage of running in a heterogeneous environment. In fact, all I require is NFS and the lock manager to coordinate multiple processes. The host processors may be on one machine, on many machines distributed over the network, or any combination thereof. Each processor must either have exclusive access to enough memory to store the entire scene description, or must share memory with other processors that do. If you want to do animations, you can run separate frames on the separate machines, rather than dividing up each frame, which is technically more difficult. 
At the end of this letter I have put a couple of files that might make your animation work easier. Using the "view" command within rview, you may write out a list of key frames, one after another, in a walk-through view file, like so: : v walk.vf -t N Where "N" is replaced by the number of seconds that you assumed has passed since the previous keyframe (for the first keyframe, just enter 0). After you have all the keyframes you want in the view file, simply run the mkspline command on that file, writing the result to a .cal file: % mkspline walk.vf > walk.cal Then, you may use rcalc to compute any frame desired from your animation: % rcalc -n -f walk.cal -f spline.cal -e 't=10.3' -o view.fmt >10.3.vf Or, generate the entire sequence: % cnt 1000 | rcalc -f walk.cal -f spline.cal -e 't=Ttot/999*$1' \ -o view.fmt | rpict -S 1 -vf start.vp -x 1000 -y 1000 \ [more options] -o frame%03d.pic {octree} & You should also be aware of the pinterp program, which can greatly speed up renderings of walk-through animations. (Ie. animations where no objects are in motion.) Since I have never gotten around to making an animation rendering program, I would strongly recommend that you show me your scripts before running any long renderings on your machines. I could potentially save you a lot of compute time with a little appropriate advice. -Greg ----------------- BEGIN "spline.cal" ------------------ { Calculation of view parameters for walk-throughs. Uses Catmull-Rolm spline. 09Feb90 Greg Ward Define: T(i) - time between keyframe i and i-1 Input: t - time Output: s(f) - spline value for f at t where f(i) is value at T(i) } s(f) = hermite(f(below), f(above), (f(above)-f(below2))/2, (f(above2)-f(below))/2, tfrac); tfrac = (t-sum(T,below))/T(above); Ttot = sum(T,T(0)); below = above-1; above = max(upper(0,1),2); below2 = max(below-1,1); above2 = min(above+1,T(0)); upper(s,i) = if(or(i-T(0)+.5,s+T(i)-t), i, upper(s+T(i),i+1)); sum(f,n) = if(n-.5, f(n)+sum(f,n-1), 0); or(a,b) = if(a, a, b); min(a,b) = if(a-b, b, a); max(a,b) = if(a-b, a, b); hermite(p0,p1,r0,r1,t) = p0 * ((2*t-3)*t*t+1) + p1 * (-2*t+3)*t*t + r0 * (((t-2)*t+1)*t) + r1 * ((t-1)*t*t); --------------- END "spline.cal" -------------------- --------------- BEGIN "mkspline" ---------------------- #!/bin/csh -f # # Make a .cal file for use with spline.cal from a set of keyframes # if ( $#argv != 1 ) then echo Usage: $0 viewfile exit 1 endif cat <<_EOF_ { Keyframe file created by $0 from view file "$1" `date` } _EOF_ foreach i ( Px Py Pz Dx Dy Dz Ux Uy Uz H V T ) echo "$i(i) = select(i," rcalc -i 'rview -vtv -vp ${Px} ${Py} ${Pz} -vd ${Dx} ${Dy} ${Dz} -vu ${Ux} ${Uy} ${Uz} -vh ${H} -vv ${V} -vs 0 -vl 0 -t ${T}' \ -o ' ${'$i'},' $1 | sed '$s/,$/);/' end --------------- END "mkspline" -------------------------- --------------- BEGIN "view.fmt" ------------------------ rview -vtv -vp ${Px} ${Py} ${Pz} -vd ${Dx} ${Dy} ${Dz} -vu ${Ux} ${Uy} ${Uz} -vh ${H} -vv ${V} --------------- END "view.fmt" ------------------------ ================================================================= AMIGA PORT Date: Tue, 13 Jul 93 13:09:05 MED From: bojsen@id.dth.dk (Per Bojsen) To: GJWard@lbl.gov Subject: Reorganized hobbes.lbl.gov:/pub/ports/amiga Hi Greg, I've reorganized the /pub/ports/amiga directory on hobbes.lbl.gov. I deleted the old files there and put some new files up instead. I broke the big archive up into a few pieces and replaced the binaries with new versions. I hope it's ok with you! 
-- Per Bojsen The Design Automation Group Email: bojsen@ithil.id.dth.dk MoDAG Technical University of Denmark bojsen@id.dth.dk ======================================================================= PVALUE Date: Sat, 17 Jul 93 22:42:00 EDT From: "Yi Han" To: greg@hobbes.lbl.gov Subject: question Hi Greg, I have installed your RADIANCE program. I just have a quick question for you. Is there a way to find out luminance and radiance on all pixels of the picture? It is like the function of press and "l" keys in ximage. But I don't want to do it pixel by pixel. Thank you very much for your help. Yi Date: Mon, 19 Jul 93 09:28:34 PDT From: greg (Gregory J. Ward) To: yihan@yellow.Princeton.EDU Subject: Re: question Yes, you can use the "pvalue" program to print out the color or gray-level radiance values. For the spectral radiance values, pvalue by itself or with the -d option (if you don't want the pixel positions). For the gray-level radiance values use the -b option (with or without -d). For the luminance values, take the radiance values and multiply them by 179. You can do this with rcalc, like so: % pvalue -h -H -d -b picture | rcalc -e '$1=$1*179' > luminance.dat -Greg ============================================================== BACKGROUND COLOR Date: Mon, 18 Oct 93 20:28:48 -0400 From: David Jones To: greg@hobbes.lbl.gov Subject: setting background to something other than black ? Hi Greg, I am putting together a rendering of a geometric model for a presentation and I want the background to be a dark royal blue, instead of black. That is, whenever the traced ray goes off to infinity without intersecting a surface, I want it dark blue. I know this can be done in an elegant way, but I cannot figure it out. Can you tell me? of course the slides need to be ready by tomorrow .... dj Date: Tue, 19 Oct 93 09:33:06 PDT From: greg (Gregory J. Ward) To: djones@Lightning.McRCIM.McGill.EDU Subject: Re: setting background to something other than black ? Just use a glow source, like so: void glow royal_blue 0 0 4 .01 .01 .3 0 royal_blue source background 0 0 4 0 0 1 360 -Greg ================================================================== DEPTH OF FIELD Date: Fri, 29 Oct 93 12:50:40 PDT From: djones@mellow.berkeley.edu (David G. Jones) To: gjward@lbl.gov Subject: optics and radiance Hi Greg, I am unsure how accurate RADIANCE can model certain optical effects. For example, what about blur ? Can I "build" a simple "camera" inside RADIANCE and "view" the image plane? I might build a scene and build a camera in this scene, with a "thin lens" from "glass" and place an iris of a certain diameter in the middle and view the "image plane" behind the lens ? Would this work? Would it give be the appropriate blur for the iris diameter? What if I have two "irises" displaced from the optical axis? This is in fact what I really want to model. Any hope in heck of this working in RADIANCE? dj Date: Fri, 29 Oct 93 14:09:14 PDT From: greg (Gregory J. Ward) To: djones@mellow.berkeley.edu Subject: Re: optics and radiance Hi David, Radiance does not directly simulate "depth of field," but it is possible to approximate this effect through multiple renderings with slightly different view parameters by summing the resulting pictures. 
All you do is pick a number of good sample viewpoints (-vp) on your aperture (or iris, as the case may be), then set the -vs and -vl view options like so: vs = s / (2*d*tan(vh/2)) vl = l / (2*d*tan(vv/2)) where: d = focal distance (from lens to object in focus) s = sample shift from center of aperture (in same units as d) l = sample lift from center of aperture vh = horizontal view angle (-vh option) vv = vertical view angle Then, sum the results together with pcomb, applying the appropriate scalefactor to get an average image, eg: pcomb -s .2 samp1.pic -s .2 samp2.pic -s .2 samp3.pic \ -s .2 samp4.pic -s .2 samp5.pic > sum.pic Let me know if this works or you need further clarification. -Greg P.S. You can start with lower resolution images, since this averaging process should be an adequate substitute for reducing (anti-aliasing) with pfilt. ================================================================== A COMPANY CALLED RADIANCE Date: Tue, 2 Nov 93 16:55:47 PST From: ravi@kaleida.com (Ravi Raj) To: GJWard@lbl.gov Subject: A company called Radiance Software Greg, I am a partner in a company called Radiance Software. Radiance was formed as a partnership in January of this year and the company develops a low-cost modeling and animation system called Movieola. The product at the moment runs only on SGIs but is expected to be ported to Sun, HP and IBM by the middle of next year. Version 1.0 of the product is expected to ship in February next year. I've heard a lot about your ray tracing product called Radiance. I haven't really used it yet but am planning on downloading your code and checking it out on an SGI. Believe it or not, when we formed a partnership called Radiance Software, it never occurred to us that there might be a rendering product called Radiance. We are planning on incorporating Radiance Software soon. Would you or Lawrence Berkeley Labs mind your product name being used as a company name by a company that develops a modeling/animation system? We'd really appreciate hearing from you one way or the other. Please reply via e-mail or call Lee Seiler at (510) 848-7621. Many Thanks! Ravi Date: Wed, 3 Nov 93 14:55:42 PST From: greg (Gregory J. Ward) To: ravi@kaleida.com Subject: Re: A company called Radiance Software Hi Ravi, Well, well. You know there is also a rendering program called "Radiant" and some other product called "Radiance." Neither one is very popular as far as I know, but the crowding of names in such a short lexicographical space is a bit confusing. Radiance (our software) has been around for a good 5 or 6 years, so having a company by the same name could confuse a lot of people. I appreciate your asking me about it, though. Since rendering is in some sense complementary to modeling and animation, perhaps your company would like to take advantage of this happenstance by becoming a distributor of our software? We are in the process of setting up a licensing arrangement to make this possible, and the fee should be quite reasonable. At any rate, I would appreciate it if you do check out the Radiance renderer and tell me what you think. (You might want to wait a few days for me to prepare the next release, version 2.3.) -Greg Subject: Radiance Digest v2n6 Dear Radiance User, I've been accumulating mail messages for about half a year, and it's time to send them out before the digest hits the treacherous 100K mark. As usual, the mail is broken into indexed categories for more convenient browsing.
These are the topics covered in this issue: BRIGHTNESS MAPPING - Mapping luminance to display values LIGHTING CALCULATIONS - Lighting and daylighting questions SPECULAR APPROXIMATION - Fresnel approximation changes MEMORY USAGE - Memory used by rpict and oconv SGI GAMMA - Correct gamma setting on Silicon Graphics PROGRAM NAMING - Radiance program name collisions RVIEW SURFACE NORMALS - Learning about surface normals in rview WATER - Modeling water STAR FILTER - Using pfilt star filter in animations GLOW - Details on glow type definition TMESH2RAD - New triangle mesh converter LUMINAIRE MODELING - Modeling unusual luminaire geometry GENSKY - Getting gensky to mimic a measured sky COMPILE PROBLEMS - ANSI-C compilation trauma GLASS BRICKS - Modeling glass bricks AMBIENT VALUES - Setting the ambient value (-av parameter) LOCK MANAGER - Problem with rpiece/rpict on some machines SPECKLE - Source of image speckle OBJVIEW - Using objview to look at a single object CSG - Constructive Solid Geometry (not) RAD PROBLEMS - Strange behavior with rad program If you wish to be taken off the Radiance mailing list, please send mail to "radiance-request@hobbes.lbl.gov". -Greg ========================================================================== BRIGHTNESS MAPPING [The following refers back to some material under the same heading in Radiance digest v2n5, part 2.] Date: Wed, 17 Nov 1993 12:48:19 GMT From: lilley@v5.cgu.mcc.ac.uk (Chris Lilley, Computer Graphics Unit) To: GJWard@lbl.gov Subject: Re: Radiance Digest, v2n5, Part 2 of 2 Kevin said: > I also looked up the >definition of the candela which is based upon radiance measurements that are >converted to luminous measurements by a max luminous efficiency of >683 lumens/watt at a wavelength of 545 nm. You said: >Luminous efficacy doesn't really have much to do with luminance or the >perception of brightness. Luminous efficacy tells how efficiently a >lamp converts electrical energy into visible light. I think there may be a misunderstanding here. You are talking about the luminous efficiency of a light source - light output for power input. Kevin is actually talking here about the luminous efficiency of the eye, which clearly does relate to the perception of brightness. He is firstly referring to luminous efficacy, which is where the 683 comes from. Indeed you later talk about that yourself. And to get luminous efficacy you take the quotient of luminous flux by radiant flux. Luminous flux in general assumes photopic vision, which is where the 545nm comes from (although Wyszecki & Stiles give it as 555nm). And the photopic luminous efficiency is also involved - brightness sensation for light input wrt wavelength, for the standard photometric observer. Do you agree with this? You also said: >For a standard color monitor, the formula boils down to: > exposure = .882/(1.219 + L^0.4)^2.5 Could you explain what you mean by a standard colour monitor, as there are a number of standards, mainly related to video broadcast and encoding rather than computer graphics, which does not use standard monitors but rather what each workstation manufacturer happens to supply. The power of 2.5 looks like a form of gamma correction (and the 0.4, its inverse). Where do the .882 and the 1.219 come from though?
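[Editor's note: whatever the constants' pedigree (Greg addresses that in his reply below), the behaviour of this scale factor is easy to probe with ev, using illustrative adaptation luminances. For a dim scene adapted to about 1 cd/m2, ev '.882/(1.219+1^.4)^2.5' gives roughly .12, while for a bright scene at 1000 cd/m2, ev '.882/(1.219+1000^.4)^2.5' gives roughly .0007, so brighter environments are scaled down much more strongly before display.]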
Charles Ehrlich said: >Very little is known about the way our eyes actually measures light I suspect the CIE would take issue with that statement ;-) Regards, Chris Lilley ---------------------------------------------------------------------------- Technical Author, ITTI Computer Graphics and Visualisation Training Project Computer Graphics Unit, Manchester Computing Centre, Oxford Road, Manchester, UK. M13 9PL Internet: C.C.Lilley@mcc.ac.uk Voice: +44 61 275 6045 Fax: +44 61 275 6040 Janet: C.C.Lilley@uk.ac.mcc "The word 'formal' in this context is a euphemism for 'useless'." H.M. Schey, 'Div, Grad, Curl and all that'. Norton: New York 1973 ---------------------------------------------------------------------------- Date: Wed, 17 Nov 93 09:50:32 PST From: greg (Gregory J. Ward) To: lilley@v5.cgu.mcc.ac.uk Subject: Re: Radiance Digest, v2n5, Part 2 of 2 Hi Chris, OK, I apologize for my glib answer to Kevin. The statement I made about luminous efficacy not having much to do with luminance or brightness is clearly wrong -- I was just trying to get him on a different track. There are many ways to interpret what Kevin wrote, but it seemed to me like he was confusing the efficacies of the light sources with the display of the rendered image, and the two are not really related. An incandescent light has an efficacy around 14 lumens/watt because most of the energy is given off as heat. The 179 lumens/watt value I use is the theoretical maximum efficacy of uniform white light over the visible spectrum. (You may contest this number, but it's right around there and it doesn't much matter what value is used as long as it's used consistently -- see first section in part 1 RD v2n5.) I do agree with your statements. (I believe 555 nm is the defined peak of v(lambda).) The formula I picked out works for "average" workstation monitors. Mine is from SGI and has a Sony XBR tube with Mitsubishi electronics (20" model). It is derived in a graphics gem I wrote to be published in Gems IV by Paul Heckbert, Academic Press. I can mail you a PostScript version of the text and formulas from it if you like, but the pictures must be carefully reproduced so I can't offer those as readily. The short answer to your questions about the constants' origin is that they came from Blackwell's fit to his subject data, the same way everything else "known" about the human visual system is derived. -Greg ========================================================================== LIGHTING CALCULATIONS From greg Fri Nov 19 08:48:05 1993 Return-Path: Date: Fri, 19 Nov 93 08:47:39 PST From: greg (Gregory J. Ward) To: Francis_Rubinstein@macmail.lbl.gov, RGMARC@engri.psu.edu Subject: Re: Daylighting Status: RO Dear Rick, Francis forwarded your questions to me, and I will do my best to answer. > 1. To determine the illuminance at a photocell location, with an appropriate > view function. You should probably use rtrace with the -I option -- this returns irradiance at a given point and surface normal direction in your scene. As for the other parameters, it really depends on your scene. Can you describe it for me? (Esp. with regard to how light is getting to the workplane.) To convert the watts/m^2 you get out of rtrace for red green and blue to illuminance in lux, pipe the output to "rcalc -e '$1=54*$1+106*$2+20*$3'". > 2. To determine the distribution of illuminance across the workplane. (Is it > possible to do this directly in Radiance?) 
Yes, you can do this by sending an array of points to rtrace then converting the result, like so: % cnt 10 5 | rcalc -e '$1=($1+.5)*20/10;$2=($2+.5)*10/5;$3=2.5' \ -e '$4=0;$5=0;$6=1' | rtrace -h -I -ab 1 -av .2 .2 .2 myscene.oct \ | rcalc -e '$1=54*$1+106*$2+20*$3' > workplane.ill To understand this command, you should read the manual pages on cnt, rcalc and rtrace. Good luck! -Greg Date: 19 Nov 1993 11:57:49 -0400 (EDT) From: "Richard G. Mistrick" Subject: Re: Daylighting To: greg@hobbes.lbl.gov Greg: Thanks for the advice. With respect to the photocell condition, what we plan to study (and Francis may provide input as a member of a student's M.S. committee) is the preferred photocell response (considering shielding, etc.) that is most appropriate for daylight dimming systems under sidelighting conditions. We will also look at different lighting systems - direct, indirect, direct/indirect. So, what we want to do is mimic a detector view field with the detector either on the ceiling or on the wall. I suppose that I can simply put in a reflectance function that would approximate the detector field of view. One additional question that I have is what is the correlation between RGB and reflectance for diffuse surfaces? Thanks for your help, and as I write this I see that I have a new message from you on a new version. --Rick Mistrick Date: Fri, 19 Nov 93 10:06:50 PST From: greg (Gregory J. Ward) To: RGMARC@ENGR.PSU.EDU Subject: Re: Daylighting Hi Rick, There are two ways to get what you're after. The most direct is to model the actual photocell geometry and use rtrace -I at the appropriate position on the photosensor. The other way is to generate a hemispherical fisheye view of the scene and apply some masking to take out the part of the view appropriate for different (hypothetical) shielding devices. The options for such a view (to rview or rpict) are -vth -vv 180 -vh 180. Just adding up (averaging) all the non-black pixels in such a picture gives you the irradiance. For your second question, the following is a diffuse surface with 75% reflectance (gray): void plastic gray75 0 0 5 .75 .75 .75 0 0 If there is a specular component (eg. for glossy paint), the actual reflectance of the following: void plastic shiny_gray70 0 0 5 .75 .75 .75 .04 .05 would be .75*(1-.04) + .04, or 76%. The completely general formula to get reflectance from the parameters of plastic is: (.263*R+.655*G+.082*B)*(1-S) + S where R, G and B are the first, second and third parameters, and S is the fourth parameter. For metal, the formula is simpler, just (.263*R+.655*G+.082*B), since the specular component is also modified by the material color. -Greg Date: 02 Dec 1993 14:40:49 -0400 (EDT) From: "Richard G. Mistrick" Subject: Radiance questions To: gjward@lbl.gov Greg: We are slowly learning more about Radiance, but I still have some basic questions that we have not been able to answer. 1. In commands such as source and glow, what are the R G B values? 2. For assigning reflectances, it would be nice to have a command in which you enter the R G B values and the output is the color on the screen and its reflectance. Is there such a command? 3. We have followed the daylighting example in the tutorial and are attempting to get an IES standard clear sky distribution. One thing that we notice is that as we change the turbidity and use RTRACE to determine irradiance, we get different irradiances inside the building but not outside.
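[Editor's note: the plastic reflectance formula in Greg's message above can be checked directly with ev. For the shiny_gray70 example, with R=G=B=.75 and S=.04, ev '(.263*.75+.655*.75+.082*.75)*(1-.04)+.04' evaluates to .76, i.e. the 76% total reflectance quoted there.]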
How do we get the accurate values for the daylight available on an exterior horizontal or vertical plane? Any suggestions that you have would be most appreciated. --Rick Date: Thu, 2 Dec 93 12:56:06 PST From: greg (Gregory J. Ward) To: RGMARC@engri.psu.edu Subject: Re: Radiance questions Hi Rick, To answer your questions: 1. The RGB values for glow, light, spotlight are all spectral radiance values in watts/sr/m^2. To compute appropriate values use the interactive program "lampcolor". 2. I agree it would be nice to have such a tool. I wrote something like this a long time ago for X10, but never ported it to X11 or anything else. Maybe it's time to do it, if I can just find the time. 3. To compute irradiance outdoors, you must use -ab 1 at least with rtrace. This value is available directly from gensky, though, in a comment in the output. When it says "Ground ambient level:" you can just take that number and multiply it by pi and you'll have the diffuse irradiance at the groundplane. The global irradiance needs the solar component as well, which may be computed from the solar radiance and altitude. For a vertical plane, you'll just have to rely on rtrace. -Greg Date: Wed, 19 Jan 1994 17:49:46 +0000 (GMT) From: "Dr R.C. Everett" Subject: RADIANCE QUESTIONS To: GJWARD@lbl.gov Dear Mr.Ward, Here at the Martin Centre in Cambridge University, we've just started using RADIANCE on a number of Silicon Graphics Indigo machines. The department is involved in various form of 3-D visual computer programming, including walk-throughs using a space-ball. We're finding RADIANCE very interesting and the new RSHOW routine is excellent and helps the slightly ponderous task of data input. I am mainly concerned on two European Community funded daylighting research projects, mainly so far about a Greek hospital. The ability of RADIANCE to model different sky conditions makes it a useful alternative to testing cardboard models in an artificial sky chamber, (especially when our sky chamber doesn't quite do Greek conditions). I have been trying to marry up daylight factor contours for a simple room as produced with the DAYFACT routine with measurements on our cardboard model rooms and mundane simple theory. I'm not getting good agreements, so I'm probably not pressing the right buttons and I haven't read the instructions properly. Since the DAYFACT routine was sponsored by LESO in Switzerland, is there any formal documentation of the theory? If so how can I get a copy? I am also puzzled by a number of things: Do windows have to have two sides to get the proper transmission coefficients? The illum process appears to replace a diffuse window light source with an equivalent point source. What are the geometric maths of this? What is the format of the illum.dat files? Presumeably they all add up to the total light passing through the window. Can I use this file to cross-check this with my own calculations, or even insert my own numbers? The -as parameter appears to be some sort of spatial smoothing filter. When I use it on DAYFACT daylight factor contour maps, is there a danger of getting wrong answers by using too much smoothing? I also feel that the only way I'm really going to get things to work properly is to sit down with someone who is already using RADIANCE for daylighting work. Any suggestions of people I can pester on this side of the Atlantic? Best Wishes, Bob Everett - alias RCE1001@cus.cam.ac.uk The Martin Centre Cambridge University Dept.of Architecture, 6, Chaucer Rd., Cambridge .U.K. 
TEL +44-223-332981 FAX +44-223-332983 Date: Wed, 19 Jan 94 10:32:03 PST From: greg (Gregory J. Ward) To: rce1001@cus.cam.ac.uk Subject: Re: RADIANCE QUESTIONS Hi Bob, It just so happens that one of the best experts on using Radiance for daylight calculations works at Aberdeen University there in G.B. His name is John Mardaljevic, and his e-mail is "j.mardaljevic@aberdeen.ac.uk". Just offhand, I'm not sure what is going wrong in your calculations. To answer your questions, though, you don't need to use two surfaces for your window as long as you use the type "glass", but you do need to make it's surface normal face inwards if you want to use mkillum with it. Mkillum does not make the window into a point source, though it may look that way in rview, which approximates large sources as points to save calculation time. You may change this and other behaviors using the various rview options -- -ds .3 will get it to treat area sources as area sources. You should familiarize yourself with the "rad" program for rendering -- it sets a lot of these options for you. The format of the illum.dat files is mysterious. I can't even tell you which values correspond to which points, as it is a rather strange coordinate mapping. (Uneven increments of the polar angle.) The -as option associated with the renderers merely improves the accuracy of the indirect calculation in spaces with a lot of light variability. It is not a smoothing option. -Greg Date: Mon, 24 Jan 1994 16:05:02 +0000 (GMT) From: "Dr R.C. Everett" Subject: RADIANCE QUESTIONS To: Greg@hobbes.lbl.gov Dear Greg, Thanks for coming back on my questions. I've already talked to John Mardelevic. Unfortunately his new job in Aberdeen is not involved with Radiance, so he's not really in a position to help with the kind of detailed questions I want to ask. His main advice to me was to learn by experiment, which I can see taking a long time. This is difficult for me because I have project deadlines and other such administrative nonsense to meet. Can you send me the e-mail address of the people in Switzerland? Perhaps I can press you again on the format of the illum.dat files, because that would allow me to isolate my problems to either the inside of the room or the sky model. I shouldn't worry about it being in obscure coordinates - we have people here who have to teach this sort of thing. I have tried the Rad script. In many ways it is very useful, but in other ways it obscures the processes going on underneath and it is these that I am trying to understand. My first attempts at using it just set off some 54-hour renderings, whereas I just wanted something that would run overnight. The most useful information has been the tables of settings that you sent out in answer to other e-mail questions. This has enabled me to make my own best pick of values to give reasonable quality. It would help if there was a layman's guide to what the various rpict options do. Since we are involved in producing daylighting teaching material and daylighting design competitions, we really are interested in finding out about daylighting software. We would also like to be in a position to help develop it for simple design applications. If there is any descriptive material, on paper, rather than in electronic form, that might help us, we will happily pay for photocopying and postage, I look forward to hearing from you, Best Wishes, Bob Everett alias rce1001@bootes.cus.cam.ac.uk Date: Mon, 24 Jan 94 09:44:53 PST From: greg (Gregory J. 
Ward) To: rce1001@cus.cam.ac.uk Subject: Re: RADIANCE QUESTIONS Hi Bob, Alas, there is nothing written in layman's terms, or any terms for that matter, on the options to rpict. The only available documentation is what you have. My best advice is to read the rpict manual page very carefully, and ask me to clear up the points or topics giving you trouble. The ultimate reference of course is the C code (yuck). [Just an aside, I have since updated the notes on setting rpict options in the ray/doc/notes/rpict.options file in release 2.4.] I have been wanting to write a better description of rpict for years now, but have found no source of funding that would allow me the time to do so. If it's any consolation, even I can't guess how long a rendering's going to take without some experimentation. The file src/gen/illum.cal contains the definitions for the mkillum data file's coordinate system. Unless you are using mkillum on a sphere, the one's you need to decypher are il_alth and il_azih: il_alth = sq(-Dx*A7-Dy*A8-Dz*A9); il_azih = norm_rad(Atan2(-Dx*A4-Dy*A5-Dz*A6, -Dx*A1-Dy*A2-Dz*A3)); norm_rad(r) = if( r, r, r+2*PI ); These expressions look worse than they really are. For example, il_azih simply gives the azimuthal angle (in radians) using the argument unit vectors [A1 A2 A3] and [A4 A5 A6]. The altitude coordinate is more unusual. It says that the coordinate index is equal to the square of the dot product between the ray direction and the local surface normal (which is given by [A7 A8 A9] in this case). What's even stranger, you will notice that the range of this coordinate is 0 and 1, but in the data file the range is somewhat different. This is to provide proper interpolation and extrapolation for data values in the normal direction and on the "horizon". The actual data in the mkillum files are radiance values in watts/sr/m^2. Multiply the values by 179 to obtain luminance in cd/m^2. Note that the actual output of a flat illum surface is the radiance or luminance multiplied by the projected area, that is, the surface area times the cosine to the normal. The answer is in watts/sr or cd (depending on your choice of units). Hope this helps. -Greg P.S. The address of Raphael Compagnon in Switzerland is "compagnon@eldp.epfl.ch". From: Stuart Lewis Subject: RADIANCE Question To: gjward@lbl.gov Date: Thu, 31 Mar 1994 16:38:22 -0500 (EST) Dr. Ward, I am using Radiance2.3 to investigate elements of daylight design. I am most interested in extracting illuminance information from Radiance scenes, and as such would appreciate any pointers you might have to documentation beyond that found in the Radiance distribution and the tutorial that would help me to make better use of RTRACE to write scripts, etc. in which I (and more importantly, my advisor!) can have some confidence. Any other leads in this area would be greatly appreciated. Thanks for your assistance. I would have posted this request to the Radiance mailing list instead, but have not yet received any confirmation of my enrollment (via newuser.) Stuart Lewis stuart@archsun.arch.gatech.edu Georgia Tech Date: Thu, 31 Mar 94 14:19:05 PST From: greg (Gregory J. Ward) To: stuart@archsun.arch.gatech.edu Subject: Re: RADIANCE Question Hi Stuart, You are on the mailing list. Normally, I do not send out confirmation to new users. Also, the mailing list is for my postings of user mail only, not for general questions (which you should send to me directly). 
If you want to see back issues of the "Radiance Digest," they may be found in the /pub/digest directory on the anonymous ftp account of hobbes.lbl.gov. Using rtrace is a bit tricky, since it is a fairly general interface to the Radiance calculation. You can see an example of how it might be applied in the file ray/src/util/dayfact.csh. Together with cnt, rcalc and total, rtrace may be used to calculate almost any lighting metric. A very simple application where illuminance (in lux) is calculated may be found in the file ray/src/util/rlux.csh. The way this script works is like so: % ximage picture_file | rlux [rtrace options] octree Typing 't' (or hitting the middle mouse button) at a point on the image will cause the illuminance at that point to be printed on the standard output. If you want more help, I'd be glad to assist you in a specific example, or check the work that you've done. -Greg ========================================================================== SPECULAR APPROXIMATION Date: Mon, 22 Nov 93 10:28:38 MST From: jmchugh@carbon.lance.colostate.edu (Jon McHugh) To: GJWard@lbl.gov Subject: Fresnel's relations Hi Greg, In your discussion of Radiance 2.3 you say that "Removed Fresnel approximation to specular reflection from Radiance materials, since the direct component was not being computed correctly. This may have a slight affect on the appearance of surfaces, but it can't be helped." How are the Fresnel relations approximations? I thought they are a direct application of Maxwell's Equations. Is there some problem with using the Fresnel relations to calculate angular properties of dielectrics such as glass and many specular building materials? ______________________________________ | Jonathan R. McHugh | | Dept. of Mechanical Engineering | | Colorado State University | | Fort Collins, CO 80523 | | (303) 491-7479 | | (303) 491-1055 FAX | | jmchugh@carbon.lance.colostate.edu | --------------------------------------- Date: Mon, 22 Nov 93 09:42:48 PST From: greg (Gregory J. Ward) To: jmchugh@carbon.lance.colostate.edu Subject: Re: Fresnel's relations Hi Jon, There is nothing wrong with the Fresnel relations! I still do use them (in one form or another) for the types dielectric, interface and glass. (Glass uses an exact solution to the infinite series resulting from internal reflections in a pane of glass, and includes polarization as it affects transmittance and reflectance.) I used to use an approximation to the "unpolarized" Fresnel reflectivity function in my normal materials that used an exponential fit I discovered (and probably not for the first time). This formula was an approximation, and was used only because it provided a faster calculation with little loss in accuracy. I don't use any Fresnel term in these materials anymore, which is a shame, but it caused a measurable inconsistency in the calculated results. -Greg ========================================================================== MEMORY USAGE From: apian@ise.fhg.de Subject: rpict memory statistics ? To: gjward@lbl.gov (Greg Ward) Date: Wed, 24 Nov 1993 06:30:05 +0100 (MEZ) Hi Greg, just curious: you ever wondered where the memory goes in rpict ? This sounds like an offensive question, which it isn't. I also see that rpict is probably one of the best optimized raytracers around. Just wondering where the 40MB size of my running rpicts come from, there's an 8MB picture, some redicolous amount of books and 25 or so function files for windows generated by mkillum. 
Not that 40MB is something to really worry about, just that 32MB machines do choke a bit... (not a high prio request) TIA Peter -- ---------------------------------------------------------------------- Peter Apian-Bennewitz apian@ise.fhg.de Fraunhofer Institute for Solar Energy Systems Tel +49-761-4588-123 (Germany) D-79100 Freiburg, Oltmannsstrasse 5, Fax +49-761-4588-302 ---------------------------------------------------------------------- Date: Tue, 23 Nov 93 22:43:31 PST From: greg (Gregory J. Ward) To: apian@ise.fhg.de Subject: Re: rpict memory statistics ? Hi Peter, A large picture will definitely occupy a lot of memory, since the data routines treat it the same as any data file and each pixel gets put into three floats, thus takes up at least (width*height*4*3) bytes of memory. Best to work with small images for this reason... Also, my malloc() routines are not as efficient space-wise as they could be, using up to twice as much core as is actually required (typically 25% waste). This is usually compensated by my use of bmalloc() which is 100% space-efficient for most of my allocations. Not used for data arrays, though (sorry). Unless you are using some outrageous options for mkillum, it's doubtful that the data files it produces use a significant amount of memory. For large, complicated models, the surface identifiers can end up taking a fair bit of core, then are almost never accessed! For this reason, I put them into their own large blocks of memory that are the first to get swapped out when the going gets rough. (See common/savqstr.c.) An rpict process can grow quite large with the interreflection calculation, especially if -ab is 2 or higher and the scene has a lot of detail geometry. Excluding trees and stuff from the calculation is a good idea using the -ae and/or -aE options. The rest of the memory is used to store the geometry and the octree, and is about as efficient as I can make it. You can try recompiling everything with the -DSMLFLT option, which reduces scene storage space by around 30%, but this has many drawbacks in rendering accuracy as cracks may appear in scenes with large ranges of dimensions. Hope this helps. I wrote Radiance on an 8M machine, so I remember what it's like to have limited memory. (An oxymoron?) -Greg From: apian@ise.fhg.de Subject: Re: rpict memory statistics ? To: greg@hobbes.lbl.gov (Gregory J. Ward) Date: Wed, 24 Nov 1993 08:16:20 +0100 (MEZ) Hi Greg and a yawnful good morning, astonishingly its getting light outside... thanks for the fast reply, I fancy its the scene complexity (ab=1, 768x576 pixel). The machine doesn't come to a complete standstill, so some fairly large amount gets swapped out and stays out most of the time. I'll look into this before doing the next attempt for a movie. thanks again, have a nice evening, Peter From: "Mr. A. Morris" Subject: oconv query part 2 To: greg@hobbes.lbl.gov Date: Fri, 3 Dec 1993 09:40:20 +0000 (GMT) Dear Greg Thanks very much for your reply, 1st Dec to my initial query about the "out of octree space" system error I've been encountering. Unfortunately I haven't resolved it yet. I've tried running the command as a batch job on one of the larger machines when I did encounter the message "out of octree space- out of memory" However once I ran it again as a unlimited memory and filesize job the error reverted to the usual "out of octree space" again. I have just installed the lasest version of radiance and included in the installation the option for scenes with a large no. 
of objects, but this doesn't seem to have had any effect on the problem. In your reply you stated, "If not, then you may be running up against the internal limits to the octree size. To change them, you need to rerun makeall install, changing the rmake command to include -DBIGMEM as one of the MACH= options. At any rate, you shouldn't start having this sort of problem until your scene has 10,000+ polygons" I am unsure how to estimate the no. of polygons that my scene may have could you explain roughly how to estimate it, ie does a cube have 6 polygons at its simplest and how many will a cylinder have? My current scene description has about 600 objects, am I right in thinking that with that no. of objects it could be possible to exceed the internal limit of 10,000 polygons? Could you also explain more simply the process of "changing the rmake command to include -DBIGMEM as one of the MACH= options" I'm a novice at this sort of thing! With respect to the "artificial sky" question, mine are being generated by the "torad" autocad to radiance converter which I've been using. I presume that it is generated using the gensky command as I am prompted to give the time, date, lat. and long. etc. I hope you can explain the procedure for changing the rmake command, I'm getting desperate to complete this work! yours faithfully Alex Morris. Date: Fri, 3 Dec 93 09:21:39 PST From: greg (Gregory J. Ward) To: jabt282@liverpool.ac.uk Subject: Re: oconv query part 2 Hi Alex, If you are using AutoCAD, it may create many surfaces per cylinder when only one is actually required (a Radiance cylinder). Unfortunately, AutoCAD only produces polygons. There should be a parameter somewhere that would allow you to change the number of facets a given volume is broken into. Cubes and other polyhedral objects shouldn't be affected, but it's a good idea to break small curved objects into the smallest number of polygons you can live with. To change the compile parameters, run "makeall install" again, this time say "yes" when asked if you want to change the rmake command. Then, change the MACH= line by adding -DBIGMEM and change the OPT= line by removing -DSMLFLT. (This is probably what's causing your results to look speckled.) Makeall will do the rest. With -DBIGMEM defined, it is unlikely that you will run into internal limits before you run out of swap space. I'm not sure what kind of artificial sky torad generates since I've never used it, but it sounds as though it's running gensky all right. The sky shouldn't affect your octree size, therefore. -Greg ========================================================================== SGI GAMMA Date: Thu, 9 Dec 93 16:44:52 GMT From: ann@graf10.jsc.nasa.gov (ann aldridge) Apparently-To: greg@hobbes.lbl.gov Hi Greg, We have been collecting beautiful pictures from the HST repair mission. I will send you a three pictures which we hope to work on. ... 2. Image of HST. This picture appears to be lit only by payload bay lights. Hopefully, without sun we can reproduce lighing. Have just begun work on this picture. See pic2real.pic. This picture looks terrible as a pic file. See pic2real.rgb for better image if you can display this format! ... Do you want the radiance files to work on getting better images? We would welcome any suggestions you have. Thanks, Ann Date: Thu, 9 Dec 93 10:08:31 PST From: greg (Gregory J. Ward) To: ann@graf10.jsc.nasa.gov Subject: pictures Hi Ann, The .rgb version looks the same as the .pic version on my display. 
The fact that yours doesn't means either that the gamma value associated with the original file is not the default 2.2 that ra_tiff expected, or that you have not set your GAMMA environment variable properly for ximage to work. SGI's are weird because they have something called the system gamma (stored in /etc/config/system.glGammaVal and set by the gamma program) that is a partial correction factor for the natural gamma of the monitor. This value is not the gamma that you get, which is: Gf = Gm/Gs where: Gf = final gamma, Gm = natural monitor gamma, Gs = system gamma set by "gamma" program The final gamma, Gf, is what you should set your GAMMA environment variable to. Most monitors have a natural gamma around 2.0, and most SGI systems set the gamma value to 1.7, so the Gf number is around 1.2. Personally, I don't like all this fussing with the natural gamma of the monitor because I believe it introduces quantization errors, so I set the system gamma to 1, like it is on non-SGI systems. Anyway, you can find out what the final gamma of your display is directly by using the picture ray/lib/lib/gamma.pic, like so: ximage -b -g 1 gamma.pic Just back up and make the closest match between the brightness on the left and the one on the right, and take the corresponding number as the combined gamma. Then, put a line like: setenv GAMMA 1.2 in your .login or .cshrc file. In the next release, you should be able to use the radiance.gamma resource on your X11 server to accomplish the same thing. In my opinion, the powers that be should have worked out these problems ages ago. It's a bit sticky, though, since monitor gamma changes with brightness and contrast settings... -Greg ========================================================================== PROGRAM NAMING From: Kurt.Jaeger@rus.uni-stuttgart.de (Kurt Jaeger aka PI) Subject: Radiance 2.3 and the calc binary To: GJWard@lbl.gov Date: Fri, 24 Dec 1993 11:55:27 +0100 (MEZ) Hi! During the installation of the radiance 2.3 software, a binary called calc was installed as well. While it is a very useful tool, I'd like to know where it is used by other radiance binaries. We have established a common software installation scheme and the name for the calc binary collides with the recently posted calc-2.92. So I'd like to know whether calc is a required binary for radiance. Otherwise it would lighten Your burden of administrating the distribution by not including calc with radiance because there is a different calc out there 8-) So short, PI -- PI at the User Help Desk Comp.Center U of Stuttgart, FRG 27 years to go ! EMail: pi@rus.uni-stuttgart.de Phone: +49 711 685-4828 (aka Kurt Jaeger) Date: Fri, 24 Dec 93 06:55:11 PST From: greg (Gregory J. Ward) To: Kurt.Jaeger@rus.uni-stuttgart.de Subject: Re: Radiance 2.3 and the calc binary Hi Kurt, You're right that calc is not an essential part of the distribution, but then there are a lot of non-essential things in the distribution. What is this calc-2.92 you mention? The name "calc" also collides with a program on the PC under Windows, so it may be time to come up with a new name. Any suggestions? -Greg Subject: Re: Radiance 2.3 and the calc binary To: greg@hobbes.lbl.gov (Gregory J. Ward) Date: Fri, 24 Dec 1993 16:31:30 +0100 (MEZ) Hi! > You're right that calc is not an essential part of the distribution, but > then there are a lot of non-essential things in the distribution. > What is this calc-2.92 you mention? 
I'll attach the intro file of the software below (its calc-2.9.0, actually, I remembered the wrong version). It was posted a few days ago in some of the sources groups. > The name "calc" also collides with a program > on the PC under Windows, so it may be time to come up with a new name. I would suggest that You try to limit the radiance distribution to the essential stuff. You can factor out the additional stuff and maybe even distribute it seperatly. I think cnt, total etc are good programs, its just that I'd prefer to keep things seperate, if possible. Are these programs in any way essential, because they are called by the core radiance binaries ? Otherwise the namespace of the programs that can live in $PATH will be cluttered sometime in the future. Ok, this is no problem for You as maintainer of radiance, but I already begin to sense the problems we have locally with the name space of $PATH. You can ask the authors of calc-2.9.0 if he wants to merge the two calc's (and their additional tools) into one package ? Their EMail adress is: dbell@canb.auug.org.au chongo@toad.com I'd be interested in hearing about Your decision on this aspect. Thanks, PI ---intro-calc-2.9.0--- Quick introduction This is an interactive calculator which provides for easy large numeric calculations, but which also can be easily programmed for difficult or long calculations. It can accept a command line argument, in which case it executes that single command and exits. Otherwise, it enters interactive mode. In this mode, it accepts commands one at a time, processes them, and displays the answers. In the simplest case, commands are simply expressions which are evaluated. [Stuff deleted.] Kurt Jaeger Date: Fri, 24 Dec 93 08:12:09 PST From: greg (Gregory J. Ward) To: Kurt.Jaeger@rus.uni-stuttgart.de Subject: Re: Radiance 2.3 and the calc binary Just one other question. Since the user community for Radiance on any given machine is probably small, why not put all the binaries in a separate directory and only include that in the path of those who really need it? That's the usual way of handling large software packages, after all. From: Kurt.Jaeger@rus.uni-stuttgart.de (Kurt Jaeger aka PI) Subject: Re: Radiance 2.3 and the calc binary To: greg@hobbes.lbl.gov (Gregory J. Ward) Date: Fri, 24 Dec 1993 23:04:42 +0100 (MEZ) Hi! The binaries for different packages *are* in seperate directories. But we provide a common directory called /client/, where the subdirectories bin/, lib/ etc are accessed by the users through $PATH. > That's the usual way of handling large software packages, after all. Yes, You're right. We are working on a scheme where the end users, the biggest part of a community, have nothing special to call or do to access as many packages as possible. So, providing something like a getpub procedure can be done, but is not preferred. So short, PI (aka Kurt Jaeger) ========================================================================== RVIEW SURFACE NORMALS Date: Thu, 6 Jan 94 17:55:07 -0500 From: phils@boullee.mit.edu (Philip Thompson) To: greg@hobbes.lbl.gov Subject: Rview suggestion Greg, Here's one for the suggestion box. It would be nice if in rview the user could somehow get the orientation of a surface surface. A quick way would be to just have a dot product of the incident ray and surface normal. This would give a indication of whether or not a surface was facing the viewer or not. It's not easy, if not impossible, to have control of how polygons are written from a cad modeler. 
This option would make life easier in those cases where it matters. Thanks, Philip Date: Thu, 6 Jan 94 15:02:05 PST From: greg (Gregory J. Ward) To: phils@boullee.mit.edu Subject: Re: Rview suggestion Hi Philip, Sure, that sounds easy enough. All the information is available within the trace routine, but I never thought the surface normal was much use since it's such a confusing quantity. How would it be if I gave both the surface normal and the angle to the incoming ray (computed from the dot product)? I think this might be the most meaningful combination. -Greg Date: Thu, 6 Jan 94 18:17:43 -0500 From: phils@boullee.mit.edu (Philip Thompson) To: greg@hobbes.lbl.gov (Gregory J. Ward) Subject: Re: Rview suggestion Yes, I was thinking of something along those lines. The problem I and other people run into is when we make windows in a cad model. When we want to turn windows into illum sources then orientation matters. So to simplify this, it seems the easiest solution would be to stand inside a room and point at the windows and get some feedback as to their orientations. You could call it "n" or "o". - Philip Date: Thu, 6 Jan 94 15:47:29 PST From: greg (Gregory J. Ward) To: phils@boullee.mit.edu Subject: Re: rview suggestion As a final compromise, I added a check so that if the ray hits the back side of the material, the trace command mentions that fact, without going into detail about the intersected angle and surface normal. -G ========================================================================== WATER From: phils@MIT.EDU Date: Tue, 11 Jan 1994 14:54:19 -0500 To: greg@hobbes.lbl.gov Subject: Material for water Greg, What is the best material for water when modeling a pool? It seems when I use a dielectric I can't see the bottom. What is the difference between the two materials in this case? I'll send you a sample pool.rad below. Thanks, Philip

# pool.rad
void plastic white_paint
0
0
5 .5 .45 .4 0 0

# pool water
void texfunc wavy
6 wave_x wave_y wave_z wave.cal -s .25
0
1 .05

wavy dielectric wavy_water
0
0
5 .9 .9 .91 1.33 0

# wavy glass wavy_water
# 0
# 0
# 4 .9 .9 .91 1.33

# genbox white_paint pool 3 5 1
[etc...]

Date: Wed, 12 Jan 94 10:03:20 PST From: greg (Gregory J. Ward) To: phils@MIT.EDU Subject: Re: Material for water Hi Philip, The difference between dielectric and glass is that glass imitates a thin pane (or whatever) made of glass, and dielectric represents an interface between air and some dielectric material. Glass is only appropriate for windows or other thin glass surfaces, and more efficient in those cases than two opposite-facing surfaces of type dielectric. Dielectric is the only type to use for thick bodies of transparent media, such as a crystal ball or a swimming pool. (There is also the type 'interface', which is appropriate for a boundary between two dielectric media, neither of which is air.) Unfortunately, lighting inside a dielectric medium is none too easy. The chief problem within Radiance is the difficulty of finding the light source (e.g. the sun) from the bottom of the pool. The only person I am aware of who's tackled this problem successfully is Mark Watt, and he wrote a paper about it in the 1990 Siggraph proceedings. You might be able to get Radiance to find the sun for a still body of water using the 'prism2' type, but you have to know what you're doing. How important is this to you? If you're just going for looks and not very interested in physical accuracy, go ahead and use 'glass'. At least you will be able to see the bottom of your pool that way.
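[A sketch of the glass-based alternative Greg recommends, which amounts to enabling the commented-out material in Philip's pool.rad and dropping the dielectric:

wavy glass wavy_water
0
0
4 .9 .9 .91 1.33

The texfunc "wavy" modifier stays as it is; the fourth real argument (the index of refraction) is optional for glass and would otherwise default to 1.52, so 1.33 simply keeps the water value from the original description.]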
-Greg ========================================================================== STAR FILTER Date: Sat, 12 Feb 1994 14:24:30 -0500 From: srouten@rubidium.service.indiana.edu To: greg@hobbes.lbl.gov Hi Greg, Reuben and I are working on a short animation which consists of some lights simply flying by an object. The trouble is, we'd like to use the star filter, which seems to rule out an absolute exposure for every frame. I read digest v2n4 where you mention, in the VIDEO section, that you could provide some information on dynamic exposure, so that is what I'm asking for. Thanks in advance, Scott Date: Mon, 14 Feb 94 10:42:45 PST From: greg (Gregory J. Ward) To: srouten@rubidium.service.indiana.edu Subject: exposure Hi Scott, Well, I guess we need ANOTHER option for pfilt! Actually, you can just do a two-pass filtering, and follow it up with a one-pass run to bring the exposure back to where you want it. Unfortunately, this is the only easy solution right now. To find out what exposure pfilt actually used, run getinfo on the filtered image and look at the EXPOSURE= line. You can use the following command to go from the exposure you got to the value "1.7". Of course, you may substitute whatever you like for the '1.7' value:

% pfilt -2 [options] orig.pic > filtered.pic
% pfilt -1 \
    -e `getinfo corrected.pic

Be sure to pay attention to back-quotes vs. regular quotes! -Greg ========================================================================== GLOW Date: Wed, 16 Feb 1994 01:41:43 -0800 From: COURRET@sc2a.unige.ch To: greg@hobbes.lbl.gov Subject: glow material Hi greg, I am having some difficulties understanding the definition of the glow material. From the manual Ray.1 you wrote on 8/12/93, I can see that the difference between the light and the glow materials is that the glow one is limited in its effect. Could you please quantify this limitation? In the tutorial named tutorial.1, you describe a standard sky and ground to follow a gensky sun and sky distribution. Why is the sky modeled with a glow material and not a light one, as is the case for the sun? Gilles. Date: Wed, 16 Feb 94 10:21:32 PST From: greg (Gregory J. Ward) To: COURRET@sc2a.unige.ch Subject: Re: glow material Hi Gilles, The limitation of the glow type is determined by the setting of the distance (fourth) parameter. Any calculation point further than this distance from a glow source will not look to the source for direct illumination. The glow source will, however, contribute in the interreflection calculation if -ab is 1 or greater. If a distance of 0 is used, or the source is at infinity (such as the sky), then a glow can only contribute indirectly. If we used a light type for the sky, Radiance would not be able to sample it adequately, because it is much too large. Light sources must be reasonably well localized in order to act properly in the direct calculation. The indirect calculation, on the other hand, works very well for widely distributed illumination sources. I hope this clears things up for you a little. -Greg ========================================================================== TMESH2RAD [The following discussion relates to the creation of the new tmesh2rad translator.] Date: Fri, 18 Feb 94 20:19:59 GMT From: ann@graf10.jsc.nasa.gov (ann aldridge) Apparently-To: greg@hobbes.lbl.gov Hi again, We do not usually use WaveFront format. We have an internal format, but can convert to various other output formats.
Below is a short sample of the file I currently use (included for you entertainment):DEFAULT is material name. first 3 numbers are vertex, last three are normal. triangle DEFAULT -14.673 -3.119 50 -0.95677 -0.203374 1.17936e-10 -14.673 3.118 50 -0.94236 0.271166 -1.6951e-10 -14.673 3.118 -50 -0.97118 0.135583 -8.47551e-11 triangle DEFAULT -14.673 3.118 -50 -0.97118 0.135583 -8.47551e-11 -14.673 -3.119 -50 -0.95677 -0.203374 1.17936e-10 -14.673 -3.119 50 -0.95677 -0.203374 1.17936e-10 triangle DEFAULT -14.673 -3.119 50 -0.95677 -0.203374 1.17936e-10 -14.673 -3.119 -50 -0.95677 -0.203374 1.17936e-10 -12.136 -8.817 -50 -0.791363 -0.574922 4.84915e-10 .....etc. I was being lazy using WaveFront because it was so similar to your format. To get the vertex normals I will have to convert format above. Most WaveFront objects we use are not triangles, so you probably should keep your format. I can work with it. (We do have a routine to triangularize a WaveFront file.) I am still working on recompiling. Then I will make a new cylinder file with normals. Thanks again, Ann. Date: Thu, 17 Feb 94 12:41:35 PST From: greg (Gregory J. Ward) To: ann@graf10.jsc.nasa.gov Subject: suggestion Instead of putting all the vertices before the triangles, it would be easier to convert your format as follows. The original: triangle DEFAULT -14.673 -3.119 50 -0.95677 -0.203374 1.17936e-10 -14.673 3.118 50 -0.94236 0.271166 -1.6951e-10 -14.673 3.118 -50 -0.97118 0.135583 -8.47551e-11 triangle DEFAULT -14.673 3.118 -50 -0.97118 0.135583 -8.47551e-11 -14.673 -3.119 -50 -0.95677 -0.203374 1.17936e-10 -14.673 -3.119 50 -0.95677 -0.203374 1.17936e-10 would become: m DEFAULT v 1 -14.673 -3.119 50 n -0.95677 -0.203374 1.17936e-10 v 2 -14.673 3.118 50 n -0.94236 0.271166 -1.6951e-10 v 3 -14.673 3.118 -50 n -0.97118 0.135583 -8.47551e-11 t 1 2 3 m DEFAULT v 1 -14.673 3.118 -50 n -0.97118 0.135583 -8.47551e-11 v 2 -14.673 -3.119 -50 n -0.95677 -0.203374 1.17936e-10 v 3 -14.673 -3.119 50 n -0.95677 -0.203374 1.17936e-10 t 1 2 3 Notice that the same vertex id's are reused. This is an advantage of my format -- you can save memory and confusion by grouping your triangles and vertices however you like. The 'm' command of course did not need to be repeated in this case, but that way your translator can be a little simpler. The final output of tmesh2rad is the same. If you want, you can use rcalc instead of writing a translator, like so: % rcalc -i inp.fmt -o out.fmt orig_file | tmesh2rad > rad_file Where inp.fmt contains: triangle $(mat) $(x1) $(y1) $(z1) $(nx1) $(ny1) $(nz1) $(x2) $(y2) $(z2) $(nx2) $(ny2) $(nz2) $(x3) $(y3) $(z3) $(nx3) $(ny3) $(nz3) And out.fmt contains: m $(mat) v 1 $(x1) $(y1) $(z1) n $(nx1) $(ny1) $(nz1) v 2 $(x2) $(y2) $(z2) n $(nx2) $(ny2) $(nz2) v 3 $(x3) $(y3) $(z3) n $(nx3) $(ny3) $(nz3) t 1 2 3 If I understand your format correctly, this should do it. -Greg ========================================================================== LUMINAIRE MODELING Date: 18 Feb 1994 17:24:32 -0400 (EDT) From: "Richard G. Mistrick" Subject: Tubular luminaires in radiance To: greg@hobbes.lbl.gov Greg: We are getting some good use out of Radiance. I have a simple question as to how to best model tubular luminaires. Below are a few questions: 1. The photometry is general performed relative to the center of the luminare. As I understand it, you generate a luminous rectangle or two (horizontal plane) that emit light upward and downward. How can I put these at the luminous center of a tubular luminaire? 
All that I've found to date is that I can use an illum source, but must put it outside the luminaire. 2. How can I make the luminous element of the luminaire (which may be curved) appear bright without contributing additional light to the space? I would appreciate any tips related to the above that you can pass along. Thanks, Rick Mistrick Date: Fri, 18 Feb 94 14:50:53 PST From: greg (Gregory J. Ward) To: RGMARC@ENGR.PSU.EDU Subject: Re: Tubular luminaires in radiance Hi Rick, I was reading your e-mail and thinking to myself, "Gee, this guy really knows his stuff." I didn't notice your name until I got to the end... Yes, you have hit on a sticking point. The solution of putting a rectangle above and below the luminaire is a marginal one, and is used by ies2rad mostly because the IES format doesn't give much of a clue as to the true fixture geometry. 1. When you say you have a "tubular luminaire," do you mean a cylinder? If in fact you had a cylinder whose entire area emitted light, you could model this as a cylindrical light source directly in Radiance (with or without an associated intensity distribution). If, as I suspect, only part of your tube is luminous, you may need to use an illum that is outside the actual luminaire. This illum may composed of one or more polygons, or you could use a cylinder larger than the actual fixture. The computed emission will be from the whole illum surface, so you shouldn't make it too much larger than the actual radiating area. 2. Use the material type "glow". If you give a small positive value as the influence distance (fourth real parameter), then the assigned surfaces will illuminate local objects (ie. other parts of the luminaire). If the value is zero, the assigned surfaces will be visually bright without having any direct contributions. In any case, the (invisible) illum surface will block any rays from reaching the glow surface, so the luminaire can't be overcounted. Luminaire modeling is definitely an art, and a rather painful one at that. I wish the IES computer committee was a little more focused on the future of full geometric models and near-field photometry and a little less focused on database issues, but I shouldn't complain if I won't volunteer... -Greg ========================================================================== GENSKY Date: Fri, 4 Mar 94 08:01:34 EST From: TCVC@ucs.indiana.edu Subject: Re: water swells To: greg@hobbes.lbl.gov Fractals can be fun.. yes it will take a while to "hone" in on the right effect, but thanks for opening the door for me! This might sound awefully naive from a "long time explorer " of Radiance, but I need to run a daylight image (first one!!!!).. I feel like a vampire that has come out into the sun. All of our geometry is based on : +z = north +x = west +y is up Say its at your location or Hong Kong for that matter, How do I twist gensky to line up with my coordinate system? And where is the suggested "ambient" value found? -Rob (I will have a series of images that I think you will enjoy. Some are highly theatrical from my Graduate Design Students in THeatre, and I am working on a building top that should be quite interesting!! Will post when all are complete) From greg Fri Mar 4 09:16:15 1994 Return-Path: Date: Fri, 4 Mar 94 09:15:50 PST From: greg (Gregory J. Ward) To: TCVC@ucs.indiana.edu Subject: Re: water swells Status: R Hi Rob, To change gensky's output from its default coordinates (+Y = North, +X = East, +Z = up), use xform. 
The easiest way for me to think about it is to imagine what the sky looks like in my coordinate system when first output by gensky, then move it where it ought to go. When gensky is run, it produces a sky whose North direction is pointing straight up instead of in the +Z direction like we want it, and whose other directions are wrong as well. To rotate gensky's North from our up (+Y) to our North (+Z), we need to rotate about our X-axis by +90 degrees (using the right-hand rule to determine the sign). Once we have done this, the original gensky X orientation will still be wrong, pointing East instead of West. We therefore rotate about the Z-axis by 180 degrees (sign doesn't matter for this particular rotation). So, the command is:

gensky Month Day Time [options] | xform -rx 90 -rz 180

Note that the order of rotations is important. There are many other equivalent rotations, also. I look forward to seeing more examples of your work and your students'. -Greg Date: Fri, 4 Mar 94 09:37:23 PST From: greg (Gregory J. Ward) To: TCVC@ucs.indiana.edu Subject: P.S. I forgot to answer the part about Southern latitudes. Simply apply the -a option to gensky, giving a negative latitude to indicate the number of degrees below the equator. From: apian@ise.fhg.de Subject: gensky (take #645) To: gjward@lbl.gov (Greg Ward) Date: Thu, 24 Feb 1994 21:03:15 +0100 (MEZ) Dear Greg, Anne Kovach and myself are wondering for the nth time what the heck gensky is producing. .... Hi Greg, I'm sitting here with Peter and we were discussing some of my calculation results in figuring out what gensky is doing. I was/am testing how well Radiance (ie. gensky) is suited to determining the Irradiation (I used the rtrace -I option for this case) on an area. To test the values from gensky, I simulated the global irradiation on the horizontal with gensky over an entire year for the Freiburg location. The values that I got were very, very low. Then I thought maybe the 3 values out from rtrace should be summed since these are the 3 channels. So I summed them and got a value 3 times larger. (oh... [P. editorial remark]) When I compared these values of +s, -s, +c, -c (4 simulations each with a different distribution) with the TRY data from Freiburg, the data IN ALL CASES was still too low -- although I multiplied by 3!! (This multiplication by 3 I learned is not correct -- via Peter.) ("Each channel can be thought of in terms of watts/m2/steradian/"bandwidth". In other words, going from a three-color measure of radiance to a monochromatic measure, we wouldn't add up the three channels to compute the total radiance, we would take some average. This simplifies computations quite a bit." [ from your mail 1 month ago, P.] ) The main points of this simulation which were puzzling are as follows: 1. Why are the Irradiation values (units of W/m2, I am assuming) so low? eg. June in Freiburg is 38 kWh/m2/month and in the TRY year it is around 169 kWh/m2/month. (This was calculated with the +s option in this case.) 2. What can be done to make these values higher?? ie. What parameters could be changed? In sky_glow? Bandwidth values? (Not just the RGB??) (I guess your answer is adjusting the zenith brightness, just why?? Adjusting CIE to local climate??? P.) 3. For SF data, does the monthly measured data correspond to gensky? 4. Is multiplying by 3 such a bad idea?? P.S. The only other option I used is -ab 1 for gensky, with, of course, the skyfunc glow values as given in the Radiance tutorial
(Cindy Larson) as follows:

skyfunc glow sky_glow
0
0
4 .9 .9 1 0

sky_glow source sky
0
0
4 0 0 1 180

( Yes, the glow problems, requiring ab=1, are known here - P.) As far as I (P.) understand, the basic problem we have is: a) we don't use the options correctly (unlikely) b) we don't understand some basic principles (well...) c) CIE is stuffed and/or requires local adjustments cheers Anne, Peter -- Peter Apian-Bennewitz apian@ise.fhg.de +49-761-4588-[123|302] Fraunhofer Institute for Solar Energy Systems, D-79100 Freiburg Date: Thu, 24 Feb 94 12:13:44 PST From: greg (Gregory J. Ward) To: apian@ise.fhg.de, kovach@ise.fhg.de Subject: Re: gensky (take #645) Hi Peter and Anne, The problems you are experiencing with gensky are probably due to the absolute difference between the zenith and solar radiances computed by the program and those of your TRY data. Daylight is a highly variable quantity, as you must know. Even if the CIE sky distributions were good approximations of real skies (which they are not), the variation in absolute brightnesses is going to make for very poor agreement between measurements and simulations. The best thing to do is to use the -r (or -R) and -b (or -B) options of gensky to calibrate the absolute levels to match your weather data. Otherwise, gensky is going to use some default assumptions about atmospheric turbidity and mean solar brightness to come up with highly approximate brightness values. Note that a new option has been added to gensky in version 2.3 for the modeling of so-called "intermediate" skies. This may be more appropriate for many of your daylight conditions in Germany than either the default clear or overcast model. In the process of implementing this option, I also managed to wreck the calculation of ground plane brightness, and you should pick up the repaired version of gensky.c from hobbes.lbl.gov in the /pub/patch directory. However, from the description of your problem, the ground plane brightness is probably not the source of your disagreement. I hope this helps! -Greg From: apian@ise.fhg.de Subject: Re: RGB nm values ? To: greg@hobbes.lbl.gov (Gregory J. Ward) Date: Fri, 25 Feb 1994 15:33:53 +0100 (MEZ) Hi Greg, about gensky/CIE: (before you say "read my lips", I read your last mail... ) after a cross-check with x11image.c and falsecolor.csh, everyone here is convinced that the average (il)luminance has to be the weighted average of the RGB channels (and NOT the sum). (we had this discussion already) Ok - so the CIE values are approx. a factor 3-6 too low. Question: Do you know of other chaps who checked gensky against reality, and what's the factor between these two? If gensky is good enough for Berkeley weather, a factor of 6 for Freiburg seems to be far out. 6 is for average monthly radiance, 3 is for daily luminance, both global radiation on a horizontal surface. as always, many TIA Peter -- Peter Apian-Bennewitz apian@ise.fhg.de +49-761-4588-[123|302] Fraunhofer Institute for Solar Energy Systems, D-79100 Freiburg From: apian@ise.fhg.de Subject: anne's factor To: gjward@lbl.gov (Greg Ward) Date: Fri, 25 Feb 1994 17:49:30 +0100 (MEZ) seems to be more on the 1.5 - 2.0 side, not so much a factor 6. user error gone unchecked. crunch. -- Peter Apian-Bennewitz apian@ise.fhg.de +49-761-4588-[123|302] Fraunhofer Institute for Solar Energy Systems, D-79100 Freiburg Date: Fri, 25 Feb 94 08:58:19 PST From: greg (Gregory J.
Ward) To: apian@ise.fhg.de Subject: Re: anne's factor I recently compared gensky's values to San Diego weather averages, and the gensky values seemed to be lower than measured averages by about 30%. The low values from gensky are probably due to a too-high turbidity value. You can lower it to correspond better with your weather, but the best way to proceed (as I said before) is to plug in your own values for zenith and solar radiance. There is no other way to get good correspondence with measurements. -G From: kovach@ise.fhg.de Subject: Radiance To: greg@hobbes.lbl.gov Date: Fri, 4 Mar 1994 11:50:18 +0100 (MEZ) Hi Greg, Thanks for your help and comments with gensky. I am presently investigating the effect of the individual parameters -R, -b and -t on the output of gensky in comparison with a Freiburg climate. In comparing the monthly values of Freiburg TRY weather data, the highest discrepancy between gensky +s and Freiburg TRY using the default values was 34 %, where TRY global horiz. was 34 % higher than gensky's values. The lowest amount of discrepancy occurred in Dec where TRY was only 8% higher than gensky. For the other options with no direct source, the TRY values were approx. 70-80 percent higher than those values generated by gensky. Since I dont want to blindly plug in different parameters that dont make any physical sense, I took a look in the source code for gensky. I also took a look at Superlite since it also uses a CIE sky distribution. There was a list of turbidity factors in the Superlite manual and the factors all seemed to be lower than 1. Therefore I only have 2 questions. 1) Did you use a specific turbidity model to estimate the turbidity factor? (eg. Linke Turbidity Factors?) What are the units in this case? (or are there no units?) 2) Could you recommend a reference where I could take a look at to get a better understanding of the meaning of the different values of these parameters and to help explain the role of these factors in the gensky model? Unfortunately Freiburg is not as full of bookstores as Berkeley! -- a great disadvantage! Your helpfulness is really appreciated. Ciao, Anne Date: Fri, 4 Mar 94 09:51:41 PST From: greg (Gregory J. Ward) To: kovach@ise.fhg.de Subject: Re: Radiance Hi Anne, I am disappointed to hear that your sky measurements are in disagreement with gensky's output. The default turbidity factor (reported by "gensky -defaults") is 2.75. This is Linke's turbidity factor, which by its place in our equation should have units of kcd/m^2. It is quite probable that a higher turbidity factor is called for in your climate. The formulas I use in gensky can be found around line 200 in gensky.c: if (overcast) zenithbr = 8.6*sundir[2] + .123; else zenithbr = (1.376*turbidity-1.81)*tan(altitude)+0.38; if (skytype == S_INTER) zenithbr = (zenithbr + 8.6*sundir[2] + .123)/2.0; For an overcast sky, I think I use the same formula used in SUPERLITE. I wish I knew where it came from, especially since it's so far off from your measurements. Perhaps this is not too surprising, though, since cloud cover varies considerably and one overcast day can be MUCH darker than another. For the clear sky luminance, I use the formula reported in a 1984 paper by Karayel, Navvab, Ne'eman and Selkowitz (Energy and Buildings, 6, pp. 283-291). This formula is apparently a simplified version of one proposed by Dogniaux relating turbidity to zenith luminance, and the factors were determined empirically from measurements of clear skies over San Francisco. 
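[A worked example of the clear-sky line above, using the default turbidity reported by "gensky -defaults": for a turbidity of 2.75 and, say, a solar altitude of 60 degrees, zenithbr = (1.376*2.75 - 1.81)*tan(60 deg) + 0.38 = 1.974*1.732 + 0.38, or about 3.8 in the kcd/m^2 units mentioned earlier. Raising the turbidity by 30% to about 3.6, as suggested below, brings this up to roughly 5.8.]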
Again, it may not be the most appropriate formula for your area. For intermediate skies, a recent addition to the program, I had no idea what to use for the zenith luminance so I just averaged the clear and overcast brightnesses together. Let me just emphasize once more that sky conditions and luminance patterns are highly variable, and predicting them is not an exact science. Obviously, the formulas used in gensky could use updating, and I've been trying to find an opportunity to do that. A lot of good data has been gathered by your group and others as part of the IEA International Daylight Measurement Year, and some of this data has even been digested and fitted to more advanced sky models. As I said before, the best you can do is to use your own values for zenith luminance (or the equivalent ground plane illuminance) and go from there. Even doing this, I think the standard CIE clear sky formula is starting to show its age. It should really be replaced by something that correlates better to real skies. If you want to use gensky's calculation of clear sky luminance for some reason, just adjust the turbidity factor up by 30% or so until the zenith luminances more closely match your measurements. There is nothing you can do other than override the default luminance value for overcast skies. Again, this is what I recommend you do for all sky types, anyway. -Greg ========================================================================== COMPILE PROBLEMS Date: Thu, 3 Mar 94 15:14:38 +0100 From: Education To: greg@hobbes.lbl.gov Subject: fun with a compiler Hi Greg! Don't know if you still remember me, but I'm still working hard bringing Radiance to different platforms. I don't know if I told you, but at the moment I'm trying to compile it on an Intel Paragon, a 100-processor machine that is supposed to be really fast. I finally got the account, but now I have several problems with the compiler. For example, the Paragon doesn't know the call of CLK_TCK (it occurs in rpict.c: 155), so I have to solve this problem. What happens if I set it to a fixed value? Does the program still work properly? Another question is, why do you define -DDCL_ATOF -DALIGN=double? Is that really necessary for every compiler, or just for a specific one? And a last one: since I only want to use the Paragon as a number cruncher, I only need to run rpiece (and so, I guess, rpict). Do you have any makefile that creates only the basic parts of Radiance? That would save me a lot of time. My problem is that I don't know enough about your program structure to remove single parts of the Rmakefiles. Thanks a lot, Daniel c/o Prof. Schmitt Architektur & CAAD HIL D 74.3, CH- 8093 ETH Zuerich E-Mail lucius@arch.ethz.ch phone +41 1 633 29 20 Date: Thu, 3 Mar 94 16:03:51 +0100 From: Education To: greg@hobbes.lbl.gov Subject: a few more questions Hi Greg! Here are a few more questions. 1. Very often I get the following message. Do I have to take care of it? PGC-W-0118-Function ebotch does not contain a return statement (calexpr.c: 274) PGC/Paragon Paragon Rel 4.1.1: compilation completed with warnings 2. Why do you redefine fabs: PGC-W-0221-Redefinition of symbol fabs (./defs.h: 346) PGC/Paragon Paragon Rel 4.1.1: compilation completed with warnings 3. Even more often I get: PGC-W-0095-Type cast required for this conversion Is that a special problem with the Paragon compiler, or is it because of the compiler options I told you about in the other e-mail? Thanks, Daniel Date: Thu, 3 Mar 94 11:12:16 PST From: greg (Gregory J.
Ward) To: lucius@arch.ethz.ch Subject: Re: fun with a compiler Hi Daniel, The problem with CLK_TCK being undefined has been repaired, and the new version of rpict.c is in the /pub/patch directory on hobbes.lbl.gov. I found out after I wrote this code at the insistence of Peter Apian that there are many different ways that so-called System-V compatible and POSIX-compatibe UNIX implementations handle this stupid problem. The -DDCL_ATOF definition is usual for systems that declare atof() as double but don't necessarily put it in math.h. If atof() is declared as something strange or is a macro, then this option is left off. The -DALIGN=double is needed for proper compilation of the memory allocation routines (common/*alloc.c) on RISC architectures. Older CISC machines usually use int as the alignment type, which is the default. The warnings you are getting can probably be safely ignored. You no doubt have an ANSI-standard C compiler on the Paragon. Radiance predates ANSI C, so you should find a way to switch off function prototypes if you want the warnings to go away. Some ANSI compilers offer a -cckr option for "Kernighan and Ritchie" standard, which is the original standard and the one to which Radiance adheres best. Hope this helps. -Greg ========================================================================== GLASS BRICKS Date: Fri, 4 Mar 94 11:20:25 WST From: crones@puffin.curtin.edu.au (Simon "fish" Crone) To: greg@hobbes.lbl.gov Subject: Re: Street Scapes Hi Greg, Thanks for the information. Don't worry about the Ove Arup slide or the San Francisco images, but I would like to see your nighttime roadway simulation. Would you mind putting it up on your ftp server? Also ( I forgot to mention it yesterday ), have you, or do you know of anyone else, who has tried to model a glass block wall? I have basically been modeling each brick individually using a rippled glass texture for the front and back sides of each block and then a white mortar in between. The problem with this is that the inside 'walls' of the glass block, ( ie. the mortar ), appear quite dark ( as they are in shadow ). I don't particulary want to make the far side glass block a light source ( ie illum ) as there are so many glass blocks. Any clues? Simon. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | | | Simon Crone - Masters Student | | | | School of Architecture, Curtin University of Technology, | | GPO Box U1987, Perth, Western Australia 6001 | | Phone:+61-9-351-7310 Internet:crones@puffin.curtin.edu.au | | | | "He who dies with the most toys wins!" | | | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Date: Fri, 4 Mar 94 08:59:12 PST From: greg (Gregory J. Ward) To: crones@puffin.curtin.edu.au Subject: Re: Street Scapes Hi Simon, I just happened to have the roadway simulation in an 8-bit Sun rasterfile (your favorite format), which I compressed and put on my ftp server in the /xfer directory. The file is called "road.ras.Z". As for your glass blocks, what material are you applying to the surfaces? Are you really using the type "glass", or are you using "dielectric" or something else. I would recommend that you use either "glass" or "trans", since these materials have a special property which allows light to pass through them directly even if they have a texture also. 
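[A sketch of the sort of block face Greg is recommending here; the ripple.cal file and its variable names are hypothetical stand-ins for whatever rippled-glass texture is already in use:

void texfunc ripple
4 rip_dx rip_dy rip_dz ripple.cal
0
0

ripple glass block_face
0
0
3 .96 .96 .96

Because glass passes light straight through in the direct calculation, the mortar surfaces inside the wall can still see the light sources on the far side.]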
Dielectric, though it may be more realistic, does not allow light to pass through without being refracted, and this refraction usually messes up the direct calculation in the way you described. A single glass or trans surface at the front and back (and sides if you like, though you should make sure to leave a small gap between your bricks and your mortar if you do this) should work. -Greg ========================================================================== AMBIENT VALUES Date: Fri, 25 Mar 94 09:23:57 GMT From: jon@esru.strathclyde.ac.uk (jon) To: mdonn@arch.vuw.ac.nz, greg Subject: ambient values query Mike Donn and/or Greg Ward, We are having a debate here over the use of ambient values. The sky file includes a ground ambient, and with outside views the inclusion of these values in the "-av" option of rpict seems to work quite nicely. Question is - in an inside view which has ONLY daylighting and direct sun penetration, I am assuming that I should include the same -av values. Some of the students complain that this gives high lux values. No, I have not gotten around to pulling across the new release, but it is on my todo list. Our desktop/converter e2r scans the sky file and picks up the ground ambient values and uses them in rpict calls unless the user overrides it. Any opinions? Regards, Jon Hand Date: Fri, 25 Mar 94 08:37:25 PST From: greg (Gregory J. Ward) To: jon@esru.strathclyde.ac.uk Subject: Re: ambient values query Cc: mdonn@arch.vuw.ac.nz Hi John, You are correct in using the ambient value suggested by gensky for exterior views, but your students are correct that it is too high for interiors. In fact, setting this parameter correctly is not easy to do automatically. You would have to perform some sort of zonal cavity approximation to find out how much average light (excluding sources) you have bouncing around your interior. The method I've found to be most effective is the one incorporated in the new "rad" program distributed with release 2.3. It takes an exposure value as determined by the user, and computes the ambient value from that. The user is still charged with finding the right exposure, but once that's done, the rest is automatic. (The computation of ambient value from exposure is simply 0.5/exp_mult.) -Greg ========================================================================== LOCK MANAGER From: sjain@arch2.engin.umich.edu Date: Fri, 1 Apr 94 09:24:58 -0500 To: greg@hobbes.lbl.gov Subject: Storing Indirect Illuminance Values in a File Thanks for the advice with the "RAD" program. I am trying to write the indirect illuminance values to a file by using the "-af" option in RPICT. Even though RPICT opens the file with read/write permissions, there is the following error: rpict: system - cannot (un)lock ambient file: Invalid argument I am using the following RPICT options:

rpict -vtv -vp 33 -16 1.76 -vd -11 3 0 -vu 0 0 1 -vh 60 -vv 60 -x 300 -y 300 -pa 1 -ab 1 -af rokko_ab -t 0 rokko2.oct | pfilt > /users/sjain/rokko2.pic

Please advise. - Shailesh Date: Fri, 1 Apr 94 09:03:24 PST From: greg (Gregory J. Ward) To: sjain@arch2.engin.umich.edu Subject: Re: Storing Indirect Illuminance Values in a File Dear Shailesh, Sounds like you're on a machine with a completely broken lock manager. I noticed this behavior before. There is a file called BUGS in the top directory on the anonymous ftp account of hobbes.lbl.gov.
Here's what it has to say: KNOWN BUGS AND WORKAROUNDS Silicon Graphics: The network lock manager seems to be broken on release 5.1.1.1 of IRIX, and therefore rpict, rtrace and rview will not be able to share ambient files while running simultaneously. Also, rpiece will not work. System V derivative UNIX's: The network lock manager in general seems to be unreliable on most SGI's, so rpiece may abort with strange results. I haven't a clue how to fix this, as it appears to be an operating system defect. Please take the time to complain to your vendor if you run into this problem. It's going to take a lot of people coming to them before they're likely to do anything about it. If the renderers give you some message about not being able to lock or unlock a file, you can add the following line to the end of src/common/standard.h as a last resort: #undef F_SETLKW This should remove all the dependent code, killing rpiece in the process. I can only hope they will fix this problem in coming releases. I do encourage you to register a complaint. -Greg ========================================================================== SPECKLE From: "Mr. A. Morris" Subject: radiance query To: greg@hobbes.lbl.gov Date: Mon, 11 Apr 1994 15:32:51 +0100 (BST) Greg, I don't know if you remember answering my queries last autumn. I've been using Radiance for several months now and have found it very useful, especially since the addition of the rad function in the latest version. Recently, however, I've been having difficulties producing the clean images which I know it is capable of. I've enlarged a model which previously produced very good images, increasing the complexity of the model. The model now has several thousand objects and renders well, except that round objects in the scene (which previously rendered giving smooth shading) now have a speckled appearance which is very pronounced. The flat elements all seem to be rendering as normal, although as the columns and other round elements are created in AutoCAD they are actually faceted. I've tried rendering these round elements alone and they work fine. Have you got any suggestions? Alex Morris Liverpool Uni. Date: Mon, 11 Apr 94 10:02:58 PDT From: greg (Gregory J. Ward) To: jabt282@liverpool.ac.uk Subject: Re: radiance query Hello Alex, There are three possible sources of the artifacts you are seeing: 1. You are using an outdated release of Radiance where I was using an inaccurate calculation for sphere intersections. This bug has existed since the dawn of Radiance, but only recently became noticeable as people started working on very large models with very small spheres. It was fixed last November, well in time for release 2.3. 2. You are seeing the result of Monte Carlo sampling for the rough specular (i.e. directional-diffuse) component, which can show up as speckles under certain circumstances. You can mitigate this effect using the new -m option of pfilt, or by increasing -st above the specularity of your surface or decreasing the -sj parameter. The latter two approaches diminish image accuracy, however. 3. You have answered "yes" to the question about rendering huge models during the Radiance makeall build, thus causing -DSMLFLT to appear in the rmake script used to compile the programs. As it warned you, there are sometimes errors in the intersection calculations that result from using this option, and as a general rule I don't recommend it. I have since taken out this question, so the next release will have to be hacked to compile with short (4-byte) floats.
To go back to double-precision data values, run "makeall clean" followed by removing the rmake command from your Radiance executable directory (/usr/local/bin by default), followed by "makeall install". This time, answer "no" when it asks you if you plan to render huge models. Hope this works. -Greg ========================================================================== OBJVIEW Date: Fri, 15 Apr 1994 09:40:04 -0600 (MDT) From: Ric Wilson Subject: Radiance question To: GJWard@lbl.gov This is probably a stupid question, but how do render the sample .oct scenes in the radiance package? I get the warning that there is no light source and then a black picture. Any help would be greatly appreciated. Ric Wilson rwilson@boi.hp.com Date: Fri, 15 Apr 94 09:16:50 PDT From: greg (Gregory J. Ward) To: rwilson@boi.hp.com Subject: Re: Radiance question Hi Ric, The sample octrees are meant to be included in other scenes as instances using the "instance" primitive type. If you want to render them by themselves, I suggest you use the following rather tortured command line: % objview '\!echo void instance example 1 example.oct 0 0' You may do this from any directory and it should work so long as your RAYPATH variable is set to search the location containing the actual octree ("example.oct" in this example). Objview is a generaly useful script that adds a few light sources and a background to a scene or object, puts it in an octree and starts rview on it. The source to this script may be found in ray/src/util/objview.csh. There is a manual page, also, but it doesn't say much. -Greg P.S. A question is only as stupid as its answer. ========================================================================== CSG From: cloister bell Sender: cloister bell Reply-To: cloister bell Subject: complex objects in radiance. To: greg@pink.lbl.gov hi there. i've been using radiance for a month or so now, and thought i'd offer some comments and ask a question. overall, i'm terribly impressed with radiance. it does lighting so well that i'm not seriously tempted to switch back to anything like povray or rayshade. but there are some things which were a lot easier in pov and rayshade that are giving me fits in radiance: 1. there doesn't seem to be any comprehensive documentation source for all the math that radiance seems capable of. for example, i'm constantly seeing hermite(....) being used in scene description files, but it took forever to find out (by looking in rayinit.cal) that it was a library function. and i wasn't able to find out anything more than that. 2. the documentation for surface property definition/declaration is pretty sparse. most of what i've been able to do i had to learn by looking at other people's sample files. while this works, no one ever bothers to comment their example files it, which means i'm left with a lot to guesswork and hocus pocus in my files. 3. not being able to group objects into larger objects for composite solid geometry is making life hard. it wouldn't be a problem except that intersecting antimatter volumes create problems. for example, short of using heightfields (which i really want to avoid for performance reasons), i can't think of a way of making the 90 degree fillets at an inside corner of a cube meet smoothly. the obvious thing to is to use 3 mutually orthogonal antimatter cylinders and a sphere such that the center of the sphere is also one of the endpoints of each cylinder. of course, the overlapping antimatter makes big black spots on my image. 
i tried just using an antimatter rounded box from genbox, but of course that didn't do what i wanted at all. am i missing something really obvious? i hope that the problems i'm having have simple solutions that have already been explained in some document that i managed not to get. but if that isn't the case, could you give me some hints? i anticipate doing a lot more work with radiance, and any advice you could give that would make my life easier would be greatly appreciated. thanks, jason. Date: Fri, 15 Apr 94 19:43:25 PDT From: greg (Gregory J. Ward) To: cloister@u.washington.edu Subject: Re: complex objects in radiance. Hi Jason, The only other documentation you might be missing is the User's Manual in the mac.sit.hqx file in the ray/doc directory. You'll need a Macintosh with StuffIt and Microsoft Word to print it out, but there's a lot of good examples and information in there. It's a bit out of date, and not 100% error-free, but it's all we have at the moment. I've been trying to interest a publisher in sponsoring a book on Radiance, but so far I've had no luck. The documentation is lamentable, but there's been consistently zero funding from the Department of Energy to do anything about it. The problems with antimatter are numerous, and I generally don't use it myself. Complicated surfaces are best represented as (smoothed) polygons, which can be produced nicely by the gensurf(1) or genrev(1) programs. The additional cost in rendering time is much less than you'd expect, thanks to the octree acceleration techniques employed in Radiance. In fact, the only reason I created antimatter was to cut windows out of walls and the like for CAD programs that work that way. Also, if you absolutely must have an accurate sphere with a section missing, antimatter is the only way in Radiance. I can't say for certain, but I have a hunch that the additional intersections and overhead associated with CSG would outweigh any advantage you might get from simplifying the geometric model in this way. Also, for the scenes Radiance is usually used to represent (architectural lighting), CSG is not really necessary. Are you really doing all this nasty geometry in the scene file itself? I thought I was the only one stubborn enough to do that... On the bright side, I've almost finished a Wavefront .obj to Radiance file translator, which might make a few things easier. -Greg Date: Fri, 15 Apr 1994 22:44:07 -0700 (PDT) From: cloister bell Subject: Re: complex objects in radiance. To: "Gregory J. Ward" > The only other documentation you might be missing is the User's Manual in > the mac.sit.hqx file in the ray/doc directory. ok. i'll check it out. > Are you really doing all this nasty geometry in the scene file itself? > I thought I was the only one stubborn enough to do that... yep. i do better with doing the geometry in my head and on paper than in trying to use front end modelers > On the bright side, I've almost finished a Wavefront .obj to Radiance > file translator, which might make a few things easier. cool. thanks. jason. ========================================================================== RAD PROBLEMS [The conversation starts with my thanking Veronika Summeraur for her work on the HTML manual pages. Her questions on rad follow in her response.] Date: Wed, 13 Apr 94 11:49:23 PDT From: greg (Gregory J. Ward) To: summer@arch.ethz.ch Subject: man pages Hi Veronika, Thank you so much for translating the Radiance 2.3 manual pages to HTML format. They look GREAT! 
Raphael told me about them, and I just loaded them on our WWW server (ftp://hobbes.lbl.gov/www/radiance/radiance.html). How did you do it? There was obviously a fair amount of work just in specifying the links and everything, but you surely must have a program of some sort to convert nroff-formatted text into HTML. Anyway, the results are terrific! Thanks a million! -Greg Date: Sun, 17 Apr 94 06:06:13 +0200 From: summerauer veronika To: greg@hobbes.lbl.gov Subject: Re: man pages hi Greg, thanks for your mail - it's really good to hear that you liked this html-stuff i did. > How did you do it? There was obviously a fair amount of work just in > specifying the links and everything, but you surely must have a program > of some sort to convert nroff-formatted text into HTML. the tutorial and the reference manual i did 'by hand' i.e. using a vi i have customized with a lot of macros, which make the insertion of html-tags quite comfortable and fast (though very cryptic ;-). for the manual pages i modified one of the man2html Perl scripts that are available on the Net, but still had to do some editing by hand (since i have no experience in programming perl, and the scripts did not meet all my needs regarding formatting). as we do not have a www server at our site (i cannot use search), i worked on your digests as well - inserting named anchors for each topic, and created a 'list of keywords' document (currently it is sorted alphabetically, maybe i'll add one sorted by topics...), which allows you to access the digests by topic, not 'by date/version'. if you are interested, i can upload those files to hobbes. i'd like to add some questions on Radiance to this mail - i had planned to ask you before but never got to actually write an e-mail. maybe you remember - i am (still) working on an AutoCAD extension program, which is used as a graphical interface to Radiance. currently i'm updating the first version which has been used by students here during the last two months. i think the results were pretty good (though the interface is not yet very stable and has lots of bugs) - you might ask Florian (wenz@arch.ethz.ch) in case you would like to see some of the final images. everyone who started to work with Radiance here is very enthusiastic about your software and the possibilities it offers to examine and visualize architectural projects. my questions are related to the 'rad' program, which i use to start the rendering process after the AutoCAD user has selected the preferred settings for sun, sky, ambient light, viewpoint, quality ... 1) oconv: i have noticed that most of the time the rad-program executes the oconv command twice. i think that this relates to the way the sun-definition is specified (using !gensky [options] in a separate scene file, with one of the +s, -s, -c and -u options besides the specification of location and date+time.) (playing around with !gensurf i watched 'rad' rebuilding the octree three (!) times before starting rpict or rview - that's why i think it is related to !gen[something].) when the size of the octree is rather big (2 to 5 Mbyte), this slows down the process a lot, especially if the user chooses the option for 'preview' (-> rview). can you give me any clues on when and why the oconv program is applied more than once? 2) scene= /object= the translation of an AutoCAD drawing/scene is organized according to entity color or layer structure. This may result in many different scene files (one per color/layer), which 'rad' takes as arguments for the oconv command.
i repeatedly had the problem that the length of the oconv command exceeded a limit of maximal string length (?), i guess that comes from the bourne shell or csh (i use both). i tried to work around the problem by maintaining a scene master file (!cat filename.rad for each separate scene file), and specifying the scene files as objects, but i still have the same problem if i use 'rad' to create a complete rif-file (-n and -e option), as 'rad' places all objects on one line, which again exceeds a built-in limit. the result is that the rif-file is incomplete and cannot be used again with 'rad'. currently i no longer specify the scene files as scene= or objects= for 'rad', but try from inside AutoCAD to keep the scene master file as up-to-date as possible. i know that i could use 'make' as an intermediate step to create/rebuild the octree but i am still looking for a solution with 'rad'. ( BTW - we are working with Sun sparc2 / SunOS 4.1.3 and SGI Indigo / IRIX 4.0.5H ) 3) ambient files: what exactly are the parameters 'rad' considers when deciding whether to remove an already existing ambient file or to re-use it? (material definitions are kept in a separate file, specified as materials= project.mat in project.rif) thanks in advance for any advice you can give. veronika. Date: Sun, 17 Apr 94 10:15:15 PDT From: greg (Gregory J. Ward) To: summer@arch.ethz.ch Subject: Re: man pages Hi Veronika, Yes, I would like very much to get the index you made for the Radiance Digests. I will add it to my WWW server's digest page. Please drop it off in the /xfer directory on hobbes.lbl.gov. Thanks! Regarding your interface work, I would be very interested to see it at some point. We may be embarking on a project with the U.S. Federal Aviation Administration (which controls air traffic in our country), writing a graphical user interface for Radiance, possibly linking it to AutoCAD. Anyway, I will try my best to answer your three questions: 1) In fact, rad does call the oconv() routine in two or three places, and only the first call should cause the octree to be rebuilt. The reason is that oconv() checks the file modification times, and only rebuilds the octree if the current one is out of date with respect to one or more of the scene= or objects= files. What I suspect is happening in your case is that you are on a network of machines that disagree about what time it is, and NFS doesn't resolve these differences. Rad runs oconv() the first time, creating an up-to-date octree, then calls it a little while later and somehow the system still says the octree is old. The best way to solve this problem is to run the network time daemon on your networked machines, so they can all agree about what time it is. A secondary option is to manually synchronize your machines' clocks and do this periodically as inaccuracies cause them to drift. I will check my code again to see if there is something I can do to circumvent the problem within rad. 2) This is a problem I noticed as well, and I have put in a fix for the next release. In the meantime, the solution you have come up with seems adequate. There is a limit built into oconv as to the maximum number of object files as well (63 according to my version of oconv.c), so fixing the -e option of rad as I have done isn't a complete solution to the problem. Beyond that, there is a limit to the length of a UNIX command line, though I believe it is quite generous (on the order of 10k).
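For what it's worth, the master-file workaround you describe is probably the best interim approach, and it is easy to sketch (the layer file names below are just placeholders for whatever your AutoCAD translator writes out). A single file, say master.rad, contains one inline command per layer:

!cat layer_walls.rad
!cat layer_floor.rad
!cat layer_glazing.rad

Then the rif-file needs only "scene= master.rad", so neither rad nor oconv ever sees more than one file argument, however many layers AutoCAD produces.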
3) Ambient files are removed by rad whenever there is a change to any scene or object file that causes the octree to be rebuilt, or if there is a change to a materials file. Basically, if anything about a scene changes, the ambient file is no longer reliably valid. Thank you again for all the great work you have done! -Greg ~s Radiance Digest v2n7 Dear Radiance Users, It has been several months since the last digest was posted, and in that time there have been a number of developments. First, I would like to mention that a paper was published on Radiance in the 1994 issue of Computer Graphics (the annual Siggraph proceedings), which contains a lot of useful information on the history and algorithms of this package. Also, the CD-ROM version of the proceedings has an HTML version of the paper, as well as the 2.4 source code distribution. The HTML version is also linked to on our new Radiance Web page, which is: ftp://hobbes.lbl.gov/www/radiance/radiance.html There you will find many other goodies, such as online documentation and images. If you have a Web site with Radiance-related pages, please write to me so I can link it in to our page. Another announcement I would like to make is the start of an unmoderated discussion group for Radiance users, which will be accessible from the alias: radiance-discuss@hobbes.lbl.gov If you wish to subscribe (or unsubscribe) to this group, please send your name and most permanent e-mail address to: radiance-request@hobbes.lbl.gov Please do not mail the general group with administrative requests! This has happened in the past, and it is nothing but a pain to everyone. Mail sent to radiance-request currently gets forwarded to me. If you are on the moderated mailing list (which you must be if you just received this digest), that does not automatically place you on the discussion group list. You must subscribe separately. If you wish to unsubscribe from the moderated group, please send a message to radiance-request to that effect. One last thing about the discussion group list -- since it is a simple mailing list and my mailer's not too bright, you may get some bounced mail when you post to this group. You should either ignore such messages or forward them to radiance-request so that I may update the list. Please, do not complain to the discussion group, because the problem members won't even get your complaints, and everyone else will tire of reading them! Bounced mail is usually caused by out-of-date e-mail addresses, which is why it's important to give me the most permanent address you can on your initial subscription, and inform me (at radiance-request) whenever your address changes. Finally, I may submit a Siggraph course proposal on Radiance, and would like to know how many people would sign up for such a course, and whether they'd prefer a half-day or full-day adventure. Please write back to me (at GJWard@lbl.gov) with your opinions. Without further ado, here are the topics covered in this digest: NOISE_FUNCTIONS - Fractal vs. Perlin noise functions DAYLIGHTING - Various daylight-related topics PATTERNS - Mapping patterns onto surfaces AMBIENT_FILES - Using ambient files VIEW_ANGLES - Computing -vh and -vv parameters INTERACTIVE_WALKTHROUGHS - Generating interactive walkthroughs SPECTRAL_COLORS - Multi-spectral sampling X11_ERROR - Rview "cannot open command line..." TRANS_MATERIAL - Setting parameters for "trans" type PARTICIPATING_MEDIA - Smoke and fog simulation MATERIALS - Several related materials questions Enjoy! 
-Greg ========================================================================= NOISE_FUNCTIONS To: greg@hobbes.lbl.gov Subject: quick jiggle questions Date: Wed, 27 Apr 94 11:06:47 EDT From: Philip Thompson Hi Greg, What is the difference between jigglepic_u and fjigglepic_u in jigglepic.cal? or fnoise3() and noise3() for that matter? (is it fast or floating point?) Thanks, Philip Date: Wed, 27 Apr 94 09:31:54 PDT From: greg (Gregory J. Ward) To: phils@MIT.EDU Subject: Re: quick jiggle questions Hi Philip, The difference is that noise3(x,y,z) is the Perlin noise function and fnoise3(x,y,z) is a fractal noise function with otherwise similar characteristics. Ideally, you would create such a function using a summation of Perlin functions with a 1/f frequency spectrum, but this is rather expensive in practice so I have written a special version. Fractal noise usually does a better job mimicking natural phenomena. -Greg ======================================================================== DAYLIGHTING Date: 27 Apr 1994 12:03:12 -0400 (EDT) From: "Richard G. Mistrick" Subject: rtrace for daylighting To: greg@hobbes.lbl.gov Greg: We are attempting to use rtrace to analyze daylighting in empty rooms. We have four walls and a window as defined in the example included in the manual. We are interested in illuminance on the work plane as well as at partially shielded and unshielded photocells. The photocells are modelled with a small obstruction surrounding a point to block the view in a certain direction. Our main problem is one of computation time when we attempt to perform accurate rtrace runs (as indicated in the "SETTING RENDERING OPTIONS" table that you sent to me). In a situation such as this, how can we best gain speed without losing accuracy? I assume that it is best to manipulate the -a? options. Room sizes that we are planning to model are 15x15 ft and 30x60 ft. I am also concerned that we achieve an accurate analysis of light arriving at the shielded photocell point. Can you give us some advice on what parameters will most affect speed and accuracy by describing how -aa, -ar, -ad and -as affect what is occurring in the computation of the interreflected component? A simple description of what Radiance is doing and how these values affect the process would be helpful. A calculation at only one point is running for 20 hours on our system (SGI Indigo/Iris workstation) using the "Accurate" input parameters (-ab 3) and we are looking for a way to reduce this time. Thanks in advance for your help. Regards, Rick Mistrick Date: Wed, 27 Apr 94 09:44:12 PDT From: greg (Gregory J. Ward) To: RGMARC@ENGR.PSU.EDU Subject: Re: rtrace for daylighting Hi Rick, In an empty room, many of the options may be relaxed without significant loss in accuracy. I recommend you do the following: 1. Make sure your window is an illum, as described in the tutorial. 2. Reduce the ambient divisions (-ad) to 256. 3. Reduce the ambient super-samples (-as) to 0. 4. Reduce the ambient resolution (-ar) to 16 if you have no groundplane, or the groundplane size divided by your maximum room dimension divided by 16 otherwise. 5. Increase the ambient accuracy (-aa) to 0.2. 6. Set the direct jitter (-dj) to 0.7. 7. Set the direct substructuring (-ds) to 0.2. 8. Set the ambient value (-av) to something sensible, probably around .3 .3 .3 in your situation. Even with your current settings, I am a bit surprised that the calculation is taking so long. Do you have any external structures modeled? Is your window a light source?
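(For reference, items 2 through 8 above translate into rtrace options along these lines -- the octree and input file names here are just placeholders, and -ab and your other options stay as you have them: rtrace -I -ad 256 -as 0 -ar 16 -aa .2 -dj .7 -ds .2 -av .3 .3 .3 room.oct < points.inp )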
Are you really just calculating a single point using rtrace -I? Perhaps you can send me your model. -Greg Date: Fri, 06 May 1994 12:55:21 -0400 To: greg@hobbes.lbl.gov (Gregory J. Ward) From: stuart@archsun.arch.gatech.edu (Stuart Lewis) Subject: Glazing Light Loss Factors Hi Greg, Thanks again for your help with my last set of questions; I now have another one for you! I am in the process of writing a little daylighting program for Architecture students based on the Lumen Method for Daylighting (IESNA RP-23) and am seriously considering using Radiance to develop some data that will let us add correlation factors for non-standard (i.e. not 0.50!) wall reflectivities. We thought it would be very useful for the students to be able to visualize (and quantify!) the relationship between surface color and interior illumination. I have gotten pretty good agreement between the manual results and Radiance, but ran into a question which seems to _maybe_ have wider relevance: How would you handle the Light Loss Factor ("accounting for dirt accumulation") of the glazing? Initially, I simply reduced the workplane illumination levels by that amount (as the IES does), but it seems to me that dirt accumulation might actually affect the behavior of a view-preserving window (causing it to become somewhat diffuse). Or, do you think it would be adequate to simply reduce the transmittance of the glazing by that amount? I haven't seen anything in the literature we have available that would clue me in. Since we began using Radiance for physical modelling, it seems this question is more relevant in that context than in reproducing the results from RP-23 (adding this feature to our program is an unanticipated bonus!) I'm thinking for example about clerestory windows or toplighting, where dirt accumulation can become quite significant. ------ Also, we haven't received any of the requested information on WINDOW or DOE that you were going to pass along. If you don't mind, would you please pass along that request again? This fax number may be more reliable: 404-458-4090 (Max Akridge's home!) Thanks a lot, Stuart [Stuart Lewis, GRA College of Architecture Georgia Institute of Technology Atlanta, GA 30332] Date: Fri, 6 May 94 10:32:17 PDT From: greg (Gregory J. Ward) To: stuart@archsun.arch.gatech.edu Subject: Re: Glazing Light Loss Factors Hi Stuart, I have forwarded your request again as a reminder to those who manage DOE-2 and SUPERLITE. I hope this time they will respond. I must admit I haven't given much thought to dirt accumulation on windows. Certainly, it is possible to model this with Radiance, using either the BRTDfunc type, transdata, transfunc, mixfunc with trans and glass, or trans alone (in order of increasing simplicity and decreasing generality). Since I have no data on how dirt affects the transmission of glazing, I am left to guess. My guess is that it adds some diffuse component, which may be a function of incident angle. What the common ranges are and how important incident angle is are two questions I could only answer by taking measurements. I have a device that could be used for such a purpose, but I have not calibrated it yet for transmission (only reflection). All the windows experts seem to be gone at the moment -- maybe there's a staff meeting or something. Anyway, I'll ask them when they come back and write again if I get any good research pointers for you.
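I haven't actually tried it, but as a crude first cut the "trans alone" route might look something like this, with numbers that are pure guesswork: void trans dirty_glazing 0 0 7 .88 .88 .88 0 0 .9 .9 That would give a total transmission of .88*.9 = 79%, nine tenths of it specular and the rest diffuse, plus a diffuse reflectance of (1-.9)*.88 = 8.8% for the dirt layer itself. Adjusting those last two numbers against whatever loss factor you settle on is probably the easiest calibration.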
-Greg From: rcl@scs.leeds.ac.uk Date: Thu, 18 Aug 94 13:56:49 BST To: GJWard@lbl.gov Subject: Radiance - daylight I am fairly new to the radiance package so please forgive my ignorance in such matters... 1) You state in digest v2n4 that EXPOSURE = K * T * S / f^2, where T = exposure time (in seconds), S = film speed (ISO), f = f-stop, K = 2.81 (conversion factor 179*PI/200) and that this is approximate for 35mm photography. Is the 35mm assumption important for this equation or is the equation true for, say, CCD cameras (which I am trying to model)? This assumes of course that I can equate the responsivity of a CCD array with a certain ISO film speed. 2) I am trying to get an accurate representation of a scene lit purely by skylight (generated either by gensky or gendaylit). I have defined my sky and ground as follows (adapted from the tutorial; note I am only viewing the outsides of buildings): !gensky 7 15 12 -a 54 -o 2 -m 0 -g .2 skyfunc glow ground_glow 0 0 4 1.6 .8 .25 0 ground_glow source ground 0 0 4 0 0 -1 180 skyfunc glow sky_glow 0 0 4 .8 .8 1 0 sky_glow source sky 0 0 4 0 0 1 180 I then define a finite 'ground plane' on which my buildings stand and cast any shadows onto. Does having the ground defined as a glowing hemisphere provide a realistic representation? I have tried defining the ground as a large disc but I always end up having a black gap at the horizon. I am more interested in modelling reality than what may 'look' correct to the viewer. 3) When I have the above as my only source of lighting, rpict warns me that I have no light sources. Is this correct? What am I doing wrong to generate such an error? Thanks in advance for any help you can provide, and I apologise again for anything fundamental that I may have missed in the documentation. Rob Love School of Computer Studies University of Leeds Leeds England Date: Thu, 18 Aug 94 11:51:56 PDT From: greg (Gregory J. Ward) To: rcl@scs.leeds.ac.uk Subject: Re: Radiance - daylight Hi Rob, In answer to your questions: 1) The exposure suggested is only if you wish to reproduce a photographic image. I don't know how to correlate ISO to CCD sensitivity, and even if I did, most cameras have such things as automatic irises and automatic gain that make any efforts to pin down the exposure futile. The main question is, "why do you want to do this?" Are you really trying to reproduce camera output, or are you simply trying to set the right exposure? If it's the latter, then there are much better ways. 2) Your sky representation is fine. If there are no significant shadows in your exterior environment cast by large structures, then the distant ground approximation is adequate. If there are significant shadows, then simply ADD a groundplane to this description. Do not remove your distant ground approximation, or you will get that dark horizon problem. 3) I do not know why the sky you showed me would give you the "no light sources found" warning. That should only happen if you use a cloudy sky or other description w/o a sun. Anyway, this is merely a warning, and indicates that without an indirect calculation (i.e. -ab >= 1), you will have no illumination at all. Hope this helps. -Greg From: rcl@scs.leeds.ac.uk Date: Thu, 18 Aug 94 20:46:28 BST To: greg@hobbes.lbl.gov Subject: Re: Radiance - daylight I am actually working in computer vision and I'm using Radiance to generate some test sequences of the sun moving over some buildings during a complete day.
Eventually I aim to be using a CCD camera to capture these sequences but I am using Radiance at the moment as I have complete control over the environment (it's always overcast in Leeds), the images are noise free, and it's very accurate. I would ideally like Radiance to give me images that would be the same/similar to what I expect to get from a CCD camera. Since I am dealing with varying lighting, my CCD camera will have gain control, gamma correction, iris control, etc. turned off. This enables me to compare images taken at different times of the day without the camera attenuating for lack of light. So, ideally I'd like to be able to set Radiance's exposure so that it mimics a certain setting on my camera (f-value, shutter speed, CCD responsivity) and seeing your equation for film-based photography gave me some hope. Equating CCD sensitivity to film speed is my problem but I was unsure if the 35mm assumption in the equation was necessary - would the equation still hold if I was using a 120 size film (for example)? I am certainly not concerned with setting the exposure so it 'looks' right. It's what the computer will see that is important. Many thanks for your prompt reply Rob Date: Thu, 18 Aug 94 13:07:05 PDT From: greg (Gregory J. Ward) To: rcl@scs.leeds.ac.uk Subject: Re: Radiance - daylight Hi Rob, All you need do is calibrate your camera against your renderings somehow. You can do this if you look at an evenly illuminated surface and adjust the speed/f-stop to a good exposure setting. Capture an image, then the captured values (average) will tell you the relationship between illuminance and image for these settings. (I forgot to mention that you must measure the light level with a luminance or lux meter.) For Radiance, the exposure is simply a multiplier between radiance (luminance/179) and pixel value. The formula for photography applies to your camera, in the sense that exposure is inversely related to the square of f-stop, and speed is of course a linear relationship. -Greg ========================================================================= PATTERNS From: "Mr. M.J. Lupton" Subject: Patterns from tiff files To: greg@hobbes.lbl.gov Date: Fri, 29 Apr 1994 10:42:10 +0100 (BST) Greg I have just ftp'd some textures from ccu1.auckland.ac.nz in tiff format. I then used ra_tiff with the -r option to convert them to pic format, but when I come to use the pic file as a texture using this arrangement of commands: void colorpict blockwork_pat 9 red green blue block1.pic picture.cal tile_u tile_v -s .5 0 1 1 blockwork_pat plastic concrete_blocks 0 0 5 .55 .55 .5 0 0 !genbox concrete_blocks northwall 9 0.06 6 | xform -t 1 15 .02 the pattern only appears on one face of the box. What am I doing wrong? Martin Lupton. Liverpool University. Date: Fri, 29 Apr 94 09:13:42 PDT From: greg (Gregory J. Ward) To: sk8@liverpool.ac.uk Subject: Re: Patterns from tiff files Hi Martin, Radiance doesn't automatically rotate a pattern onto various faces of a solid, at least not with the tile_u and tile_v variables defined in picture.cal. You either have to rotate the pattern to make 3 types for three surface orientations (XY, YZ and XZ planes), or modify the U and V variables defined in rayinit.cal to do it for you. Something like the following should work: void colorpict blockwork_pat 9 red green blue block1.pic .
frac(U) frac(V) -s .5 0 0 blockwork_pat plastic concrete_blocks 0 0 5 .55 .55 .5 0 0 !genbox concrete_blocks northwall 9 0.06 6 | xform -t 1 15 .02 -Greg ========================================================================= AMBIENT_FILES Date: Tue, 03 May 1994 14:48:25 +1200 (NZT) From: mat@ccu1.auckland.ac.nz (M Carr) Subject: ambient files To: GJWard@lbl.gov Hi Greg Just a pair of questions. If I am rendering two views of a single scene at once, and they are both using the same ambient file, will I have any problems? What exactly are ambient files? Being view independent, they still change quite dramatically between views (hence my concern above), i.e. they grow. Mat ______________________________________________________________________________ Matiu Carr School of Architecture Property and Planning University of Auckland New Zealand email: m.carr@auckland.ac.nz Date: Thu, 5 May 94 22:48:08 PDT From: greg (Gregory J. Ward) To: mat@ccu1.auckland.ac.nz Subject: Re: ambient files Hi Mat, The answer is that you needn't worry. The values will be shared to the extent possible in the two renderings. Especially if you are doing more than one bounce (-ab 2 or greater), many values will be common and you WILL save a lot of time. It is true that the file will continue to grow, but that's the price you pay for speed. And, if you're using 2.3 or later, you can have as many rpict processes sharing the same ambient file on as many machines as you like. Provided you have a working NFS lock manager (which is by no means certain), the file will be updated in a consistent fashion and values will be shared among processes. Read the document in /pub/doc/parallel.txt on hobbes.lbl.gov for more details. If you have Mosaic, you can access these through our HTML pages starting at "ftp://hobbes.lbl.gov/www/radiance/radiance.html". -Greg ========================================================================= VIEW_ANGLES Date: Mon, 16 May 1994 16:23:28 +1200 (NZT) From: mat@ccu1.auckland.ac.nz (M Carr) Subject: rpict -vh -vv To: GJWard@lbl.gov Hi Greg Is there a formula for working out what a particular pair of -vh -vv settings will produce in terms of the final image dimensions? There does not seem to be a linear correspondence. I am just using the vtv view setting. It's not urgent. Thanks Mat ______________________________________________________________________________ Matiu Carr Date: Mon, 16 May 94 12:28:22 PDT From: greg (Gregory J. Ward) To: mat@ccu1.auckland.ac.nz Subject: Re: rpict -vh -vv Hi Mat, The relationship between perspective view angles and image size is determined by tangents, i.e.: tan(vh/2)/tan(vv/2) == hres/vres Note that the angles must be divided in half (and expressed in radians if you use the standard library functions). If you know what horizontal and vertical resolution you want, and you know what horizontal view angle you want (and your pixels are square), you can compute the corresponding vertical view angle like so: % calc hres = 1024 vres = 676 vh = 40 vv = 180/PI*2 * atan(tan(vh*PI/180/2)*vres/hres) vv (resp) $1=27.0215022 Thus, -vh 40 -vv 27 -x 1024 -y 676 should result in (almost) no adjustment of the horizontal or vertical resolutions by rpict. If you had just taken a rough guess of the vertical view angle, rpict would shrink the horizontal or vertical image size to ensure that the pixels were square. Hope this helps.
-Greg ========================================================================= INTERACTIVE_WALKTHROUGHS From: lpostner@cs.clemson.edu Date: Thu, 19 May 94 13:19:55 EDT To: gjward@lbl.gov I have recently installed Radiance 2R4 on a system of SGI Indys running IRIX 5.2. I was interested in rendering 3D models and walking around them interactively for use with VR helmets. Is there a way of doing this with Radiance? If not, how easy would it be to write an interactive viewer? I am not interested in the raytracing aspect, but rather the radiosity part of Radiance. Any and all suggestions would be greatly appreciated. Thanks. Lori Postner Dept. of Computer Science Clemson University email: lpostner@cs.clemson.edu Date: Thu, 19 May 94 10:26:51 PDT From: greg (Gregory J. Ward) To: lpostner@cs.clemson.edu Subject: VR and Radiance Hi Lori, There is no program that comes with Radiance for interactive walk-throughs. In fact, I think this problem is too difficult to solve interactively in the general case. The best you can do is a diffuse approximation, as used in radiosity-type programs. The one way to get what you want with Radiance is on an SGI Reality Engine or some such that does real-time textures, then use Radiance to compute illumination maps. I haven't done this myself, but I know some folks in Zurich who have if you need some more pointers. -Greg From: lpostner@cs.clemson.edu Subject: Re: VR and Radiance To: greg@hobbes.lbl.gov (Gregory J. Ward) Date: Thu, 19 May 94 13:53:02 EDT How do I get Radiance to dump me a 3D data file? Also, please send me the addresses of the people in Zurich, perhaps they will have some good suggestions. Thanks Lori Postner Department of Computer Science Clemson University email lpostner@cs.clemson.edu Date: Thu, 19 May 94 11:05:05 PDT From: greg (Gregory J. Ward) To: lpostner@cs.clemson.edu Subject: Re: VR and Radiance Hi Lori, The fellow who has done this is Daniel Lucius, and his address is . You cannot dump full 3-d illumination data from Radiance. It is necessary instead to compute illumination maps individually for each surface. From my limited experiments, it seems most efficient to compute the irradiance using a parallel view of each wall, floor and ceiling, then combine this with the appropriate colors and textures during rendering. For example, let's say you have the following wall: wall_mat polygon east_wall 0 0 12 10 0 0 10 0 3 10 7 3 10 7 0 You would run rpict like so to compute an illumination map with a resolution of 0.1 units (meters?): rpict -i -vtl -vp 9.99 3.5 1.5 -vd 1 0 0 -vh 7 -vv 3 -vu 0 0 1 \ -ab 1 [etc...] -x 70 -y 30 octree > east_wall.pic The -i option tells rpict to compute irradiance instead of radiance. You will have to adjust the brightness and convert the output picture into the appropriate image format (using fromrad or something), then write a GL program to use it, along with the other maps. -Greg ========================================================================= SPECTRAL_COLORS From: Carlos F. Borges Subject: Radiance question To: GJWard@lbl.gov Date: Mon, 8 Aug 94 14:59:49 PDT Dear Greg, I enjoyed your talk in Orlando very much. I am interested in using the Radiance package to try different color representation approximation methods and was wondering what the color modelling methodology of Radiance is. Do you use, or allow, full spectral description of lights and reflectances (transmittances) or is some kind of RGB method used? Is it possible to change the manner in which color computations (independent of scene geometry) are done?
Where can I find out more on this subject before I start trying to use the package? I am trying to find a system that allows me to change the underlying color modelling to see how well different simplification schemes work (like the one in my 1991 SIGGRAPH paper, or different spectral sampling approaches). My email address is: borges@waylon.math.nps.navy.mil -- Cheers, Carlos Date: Mon, 8 Aug 94 15:23:34 PDT From: greg (Gregory J. Ward) To: borges@waylon.math.nps.navy.mil Subject: Radiance color rep. Hi Carlos, Unfortunately, I was rather stupid in my initial use of color in Radiance, and now I'm more or less stuck with a 3-sample representation. These normally correspond to RGB, though you can define them to be whatever you like (e.g. XYZ). Nevertheless, certain desperate people have used Radiance to compute images with more spectral samples by rendering the same image several times, changing only the materials. This is of course not the most efficient route to take, but it does work if nothing else is available. It would be possible to alter the source code to handle more spectral samples, but there are several places where 3 samples are assumed. (Most of the code is in the form of macros, which, in contrast, can be easily changed.) The main reason I have still not generalized the code is because it would mess up the input format rather badly. I may yet think of a way around this problem, but until I have a greater need and a little time to do it... If you want to use a scene description language that provides full color flexibility, why don't you investigate the MGF format described on our anonymous ftp site (hobbes.lbl.gov) in the /www/mgf directory? There you will also find HTML documents for Mosaic. (i.e. try the URL: "ftp://hobbes.lbl.gov/www/mgf/HOME.html".) -Greg ========================================================================= X11_ERROR Date: Tue, 16 Aug 1994 10:30:19 -0700 (MST) From: AGMXR@acvax.inre.asu.edu Subject: Help with rview - radiance utility To: GJWard@lbl.gov Hi, I have installed Linux on one of the Pentium PCs and compiled radiance on it. When I tried to display the 'oct' using rview, it gave me the message "rview: cannot open command line window". Can you help me resolve this problem? Thank you very much. Muthu AGMXR@ACVAX.INRE.ASU.EDU Date: Tue, 16 Aug 94 10:46:34 PDT From: greg (Gregory J. Ward) To: AGMXR@acvax.inre.asu.edu Hi Muthu, The most probable cause of the command line window not opening is that the X11 driver is not finding the default text font, "8x13". You can modify the x11.c file in the ray/src/rt directory to use a font you DO have, or figure out some way of making this font available. One way to do this: in your X11 fonts directory there should be a file called "fonts.alias", to which you may add an entry for 8x13, aliasing it to another font that you do have. Hope this helps. -Greg ========================================================================= TRANS_MATERIAL Date: Thu, 1 Sep 1994 05:08:22 -0700 From: COURRET Gilles To: greg@hobbes.lbl.gov Subject: translucent material Hi greg, I am working on Radiance simulation of zenithal openings, and especially on translucent plastic glazing. Can you confirm that: "void trans opale 0 0 7 .3 .3 .3 0 0 .6 .1" is effectively a grey translucent material which has a transmission fraction of 60% with a specular component of 10%? Thanks in advance, Yours, Gilles Date: Thu, 1 Sep 94 13:33:55 PDT From: greg (Gregory J.
Ward) To: courret@divsun.unige.ch Subject: Re: translucent material Hi Gilles, So I see that you get confused by the trans type, just as I do! If what you want is no reflection from the surface whatsoever, and 60% total transmission, 10% of which is specular (leaving 50% diffuse), you should use: void trans opale 0 0 7 .6 .6 .6 0 0 1 .1666 I hope this is what you're after. What you gave me was: void trans opale 0 0 7 .3 .3 .3 0 0 .6 .1 which would have 12% diffuse reflectance ((1-.6)*.3), 16.2% diffuse transmittance (.3*.6*(1-.1)), and 1.8% specular transmittance (.3*.6*.1). I'm sorry that this type is so confusing. I get horribly confused by it myself... -Greg Date: Fri, 2 Sep 1994 04:55:26 -0700 From: COURRET Gilles To: "(Gregory J. Ward)" Subject: Re: translucent material Hi greg Thanks for your fast reply. With the new material definition you gave me the results are much more realistic! The daylight factor is much higher. But i have the feeling that it is a little bit too high. Are you sure about trans=1 for a transmission of 60%? Yours, Gilles Date: Fri, 2 Sep 94 09:09:31 PDT From: greg (Gregory J. Ward) To: courret@divsun.unige.ch Subject: Re: translucent material Hi Gilles, Yes, I am sure about trans=1 for transmission of 60%, because the color was set to .6 .6 .6, which both transmitted components (specular and diffuse) are multiplied by. -Greg ========================================================================= PARTICIPATING_MEDIA Date: Tue, 6 Sep 94 13:59:57 -0400 From: randal.sims@srs.gov (Randy Sims) To: GJWard@lbl.gov Subject: Radiance vs. Participating Medium I am pursuing computations of global illumination in the presence of participating media. The assumption is that absorption, emission and scattering events in the environment result in a radiance along any direction that changes with the position along that direction. Such media might include soot, dust, smoke, fog, etc. Past treatments of such phenomena include Rushmeier and Torrance's extended radiosity methods (Siggraph'87) and direct Monte Carlo simulations. But, other than my interests, I'm sure you are well aware of these issues, phenomena and treatments. I have browsed the Radiance digests and find little discussion of such effects. Is there any work and/or research with Radiance in this area? Can Radiance be extended to address these phenomena? Do you (individually and/or collectively -- the Radiance community) have an interest in addressing participating media? Randal N. Sims (Randy) Westinghouse Savannah River Co. Savannah River Site 773-42A, 129 P.O. Box 616 Aiken, SC 29802 USA TEL: (803)725-8347 FAX: (803)725-8829 Email randal.sims@srs.gov Date: Tue, 6 Sep 94 11:24:03 PDT From: greg (Gregory J. Ward) To: randal.sims@srs.gov Subject: Re: Radiance vs. Participating Medium Hi Randy, I have only recently begun to dabble in participating media, following some advice and articles from Holly Rushmeier on the topic. I'm afraid that Radiance is not very well equipped to deal with this problem in its current state. I had toyed with the idea of modeling the particles themselves, since Radiance can handle even ridiculous scene complexity, but I think the resulting simulation would be so slow that it wouldn't be worth the bother. Instead, I have been using a shortcut that accounts only for some interactions, such as absorption of direct light and along eye rays, as well as approximate scattering from light sources to the eye.
I do this by combining a change in the direct lighting calculation with a post-process using pcomb. If you are interested in seeing the results, I have just one picture of the lower deck of a ship with a uniform distribution of soot. (Non-uniform distributions might be modeled with this method, but the relation would have to be fairly simple.) Just let me know what format you want the image in, and I'll drop it in the /xfer directory on hobbes.lbl.gov. (It's there right now as a Radiance picture if that will do.) -Greg ========================================================================= MATERIALS Date: Mon, 12 Sep 1994 23:08:48 -0500 From: Dana Peters To: greg@hobbes.lbl.gov Subject: newbie Greetings, As the subject says I am a new user and have some very basic questions. I am currently an architecture student and have been using Wavefront for three years to create architectural walkthroughs and various other animations. I am familiar with very general rendering principles, but have very little experience with the graphics programming principles and the actual physical calculations. I do, however, have a basic understanding of C. I have installed Radiance, read the manual, worked the tutorial, and read most of the digests. Ok, now for the questions: 1. I have no problem with the modeling, but I do need some help with materials. I understand the basic material types (plastic, metal, glow, light) but have problems thereafter, specifically those that require func files. What types of real materials have anisotropic roughness used in plastic2, metal2, etc? What is the benefit of using this material type? What, in layman's terms, does a bidirectional reflectance distribution function do to a surface? Can you give me some examples of real materials that this is used to represent? I also have some problems with the difference between dielectrics and glass, but I think I can figure that out on my own. 2. I would like to start using Radiance extensively because of its wonderful and accurate renderings. Will it be necessary for me to learn how to write function definitions myself, or can I get by using those supplied in the software and those written by others? Learning how to write these would be helpful, but I currently have no idea how to define procedural textures, etc. 3. In the digests someone mentioned the possibility of having a radiance training class. Has this actually happened in the past? Will it happen in the future? I am sure that a class would be the best way for me to learn radiance. (aside from hiring a personal tutor) 4. Concerning the files at your ftp site... are most of these included in the distribution? for example, I noticed several libraries and some objects. Are these included somewhere or should I pick them up? Also what exactly are all those things in the tests/empty/... directory? just curious. Well, I guess that is enough for now. Sorry to bother you with such basic questions. Perhaps you could refer me to a good book or paper that might explain some of the basic stuff that I am lacking. There is no need to hurry with answers to these questions. I am not in any rush. thanks in advance, -Dana Peters peters@erc.msstate.edu Date: Tue, 13 Sep 94 10:37:11 PDT From: greg (Gregory J. Ward) To: peters@ERC.MsState.Edu Subject: Re: newbie Hi Dana, Good questions, all. It sounds like you have already investigated the Radiance Digest archives in the /pub/digest/ on hobbes.lbl.gov.
Did you know that these have also been collected and indexed by Veronika Summerauer in HTML format, and are available on the Web from the Radiance page at: ftp://hobbes.lbl.gov/www/radiance/radiance.html Anyway, I don't think I've answered your questions before, so I'll attempt to do justice to them here. > 1. I have no problem with the modeling, but I do need some help with > materials. I understand the basic material types (plastic, metal, > glow, light) but have problems thereafter, specifically those > that require func files. What types of real materials have > anisotropic roughness used in plastic2, metal2, etc? What is the > benefit of using this material type? What, in layman's terms, does > a bidirectional reflectance distribution function do to a surface? > Can you give me some examples of real materials that this is used > to represent? I also have some problems with the difference between > dielectrics and glass, but I think I can figure that out on my own. Yes, materials are difficult, aren't they? Plastic2 and metal2 are appropriate for surfaces like varnished wood and brushed or rolled metal -- anything that has elongated highlights. Note that you don't necessarily have to use a function file for these types. If the brushed direction is aligned with a vector and the surface is relatively flat, you can simply use a constant vector in the place of the "ux uy uz" variables, e.g.: void metal2 brushed_aluminum 4 1 0 0 . 0 6 .7 .7 .7 .85 .02 .08 Since the .02 value corresponds to the roughness along the [1 0 0] vector, and the .08 value is the roughness in the perpendicular direction, the above material has a highlight that is narrower in the X direction. A bidirectional reflectance-transmittance distribution function (BRTDF, often called BSDF for bidirectional scattering distribution function) is a general function describing how light interacts with a surface material. You should use it only when all the other material types fail, as it is the most difficult to apply and the least efficient type in Radiance. One example where it might be needed is velvet, which has very peculiar reflectance properties. Retroreflective materials are another example. The difference between dielectric and glass is simply that glass simulates two close, parallel, dielectric surfaces. Glass is more efficient since it approximates the internal reflections in closed form rather than computing all the many rays that two dielectric surfaces would require. > > 2. I would like to start using Radiance extensively because of its wonderful > and accurate renderings. Will it be necessary for me to learn how to > write function definitions myself, or can I get by using those supplied > in the software and those written by others? Learning how to write > these would be helpful, but I currently have no idea how to define > procedural textures, etc. Unfortunately, this is something I have never documented. Surprisingly, a number of users have figured it out on their own and written procedural patterns and textures and contributed their work to the /pub/libraries/ directory on hobbes.lbl.gov. I suggest that you look at these as well as the .cal files distributed with Radiance and try to learn by example if you need to roll your own. > > 3. In the digests someone mentioned the possibility of having a radiance > training class. Has this actually happened in the past? Will it > happen in the future? I am sure that a class would be the best way > for me to learn radiance.
(aside from hiring a personal tutor) Yes, there was a workshop held for about 20 people in Spring of 1991, and another this Spring for only 5 people in Germany (disappointing turnout, but the tuition had to be rather high to pay for my travel out there). I am thinking of holding a Siggraph course on Radiance either next year or the year after. > > 4. Concerning the files at your ftp site... are most of these included in the > distribution? for example, I noticed several libraries and some > objects. Are these included somewhere or should I pick them up? > Also what exactly are all those things in the tests/empty/... > directory? just curious. The main distribution contains source code, documentation and examples not found elsewhere on the ftp server. The /pub/* directories contain mostly Radiance-related contributions, and are distributed by tape to people who don't have ftp access. In most cases, there are README files describing the contents of each directory. The /pub/tests/ directory was designed to contain comparisons between Radiance and other global illumination calculations, but contributions over the years have been rather disappointing. -Greg ~s Radiance Digest, v2n8 (Part 1) Dear Radiance Users, Here is a long-overdue digest of mail between myself and some of you, dating back to October of last year. Because I've been so derelict in my moderator's duties, I had to break this into two parts since one part exceeded the 100K limit on many mailers. The second part, however, consists of just one discussion, which is rather long-winded and highly speculative. I figured there would be only a few people interested in that one, so I put it in a separate, ready-to-delete message. As usual, the digest is broken into easily searched topics. This message contains discussions on the following topics: DAYLIGHT CALCULATIONS - Daylight and sky simulation RADIANCE PORTS - Ports to the Amiga and Macintosh RENDERING PARAMETERS - Q&A on rpict and rad parameters SGI BUG - Core dumps on IRIX 5.x RADIANCE VS. POV AND RENDERMAN - Crude analysis of program diff's MIRROR ABUSE - What happens in a perfect funhouse? RETROREFLECTIVE SURFACES - Modeling retroreflection COLOR AND REFLECTANCE - CIE -> RGB and reflectance models GEOMETRIC MODELERS - What to use with Radiance? LENSES - Correct modeling of caustics HERMITE CUBIC FUNCTIONS - How to specify Hermite curves AMBIENT BOUNCES - Sorting out some test results BUMP MAPS - Converting height-field to texdata Enjoy! -Greg ======================================================================= DAYLIGHT CALCULATIONS From: kathrin schwarz Subject: radiance To: greg@hobbes.lbl.gov Date: Fri, 7 Oct 94 10:35:58 MEZ Dr. Ward, we have a short and simple question: How do we get the direct irradiance and diffuse irradiance in W/m^2 in Radiance 2.3? Thanks Kathrin and Christof Date: Fri, 7 Oct 94 09:07:29 PDT From: greg (Gregory J. Ward) To: kathrin@prehp.physik.uni-oldenburg.de Subject: Re: radiance Radiance computes only total irradiance, using the -I or -i options of rtrace. If you wish to separate "direct and diffuse" for daylight calculation purposes, you must perform two calculations using different executions of gensky. For direct only, you must remove the sky source description and/or turn the interreflection calculation in rtrace off by setting -ab 0 -av 0 0 0. For diffuse only, you can use the -s option of gensky to remove the sun source, or simply do a full calculation and subtract the direct component computed above. I hope this makes some sense.
I realize that it is confusing. The chief difference between Radiance and other lighting programs is that Radiance will not calculate anything that cannot be measured. Direct only and diffuse only portions can only be approximately measured by shading the photosensor from the solar direct. You can do this in Radiance also by creating a small shield in front of the sun source, if you like. -Greg Date: Sun, 19 Feb 95 15:22:41 CST From: sabol@zen.wes.army.mil (Bruce Sabol) To: greg@hobbes.lbl.gov Subject: Generating sky in RADIANCE Greg: I'm involved in a forestry remote sensing project in which we're trying to use RADIANCE to generate false color scenes of a natural forest in LANDSAT Thematic Mapper Bands (#2:0.52-0.62um {mapped to blue}, #3:0.63-0.69um {mapped to green}, and #4:0.75-0.88um {mapped to red}). I need to better understand how GENSKY functions in order to set the direct solar flux and diffuse skylight in these bands. To assist in determining direct and diffuse irradiance values in these bands I'm using the atmospheric model LOWTRAN7. Below is an example .RAD file output from GENSKY, with some added documentation comments, followed by some questions.
------------------------------------------------
# Howland Maine 9/8/90 11 AM clear sky
# gensky 9 8 11 +s -a 45.2 -o 68.75 -m 75
# Solar altitude and azimuth: 49.7 -12.8
# Ground ambient level: 18.3
# turbidity set at default value of 2.75
void light solar 0 0 3 6.88e+06 6.88e+06 6.88e+06    # (watts/rad sq/sq m)
solar source sun 0 0 4 0.142792 -0.630758 0.762728 0.5
void brightfunc skyfunc 2 skybr skybright.cal 0 7 1 1.33e+01 2.37e+01 6.53e-01 0.142792 -0.630758 0.762728
# end GENSKY output, begin my add-ons
skyfunc glow skyglow 0 0 4 0.03 0.4 1.0 0.0    # rgb radiance (w/rad^2/m^2), max radius
# values set to look right - no physical basis for selection
skyglow source sky 0 0 4 0 0 1 180
--------------------------------------------------------
Questions: 1. Direct Solar Flux. Turbidity was systematically manipulated from 1 to 10. Zenith brightness (variable A2 in skybright.cal) and ground plane brightness (variable A3) both increase with turbidity as expected. However, the RGB solar values were constant (at 6.88e+06) for all bands for all turbidity values. This was unexpected - as aerosol turbidity increases, diffuse irradiance increases and direct solar flux decreases. Where does the value 6.88e+06 come from and why is it the same across all 3 bands and for all turbidity values? This is considerably higher than the red, green or blue direct irradiance estimates from LOWTRAN7, although it is very close to the broadband visible radiance (0.4-0.7 um) value predicted by LOWTRAN7. Where should I be substituting in estimates of direct spectral flux in other bands? 2. Diffuse Irradiance. How is A2 (1.33e+01 in the example above) computed and what does it represent? It appears very close to zenith sky irradiance in the visible band (0.4-0.7um) predicted by LOWTRAN. How does (or should) it relate to RGB values in skyglow? Should the spectral irradiances (RGB) be summed to equal A2? Any help you can give me on these questions would be greatly appreciated. Regards Bruce Sabol U.S. Army Waterways Experiment Station Environmental Laboratory 3909 Halls Ferry Rd. Vicksburg, MS 39180 sabol@zen.wes.army.mil Date: Tue, 21 Feb 95 12:20:05 PST From: greg (Gregory J.
Ward) To: sabol@zen.wes.army.mil Subject: Re: Generating sky in RADIANCE Hi Bruce, In order to better understand the output of gensky and its meaning, you should refer to the file "skybright.cal" in the src/gen directory, which should be duplicated also in your Radiance library directory (wherever that is). I assume you have done this already, based on the questions you posed. Let me start by saying that I have little confidence in the sky or solar luminance calculations of gensky. They are based on some simple rule-of-thumb calculations and mean sky measurements taken years ago. If you are serious about your sky model (and it appears that you are), you should insert your own values for sky and solar luminance via the -b and -r (or -B and -R) options to gensky. This will override the turbidity calculation for zenith luminance, which, as you noted, does not affect solar luminance as it ought. If you have access to LOWTRAN, I would recommend that you stick with its calculations for the relevant luminances. You will even find that the CIE-recommended model for clear and overcast skies is not very accurate. It is used more as a standard for calculation than anything else. There are better sky models floating about, but no official groups have yet gotten together and put their stamp of approval on them. Luminance can be computed from spectral radiance in Radiance with the approximate formula: cd/m^2 = 179 lumens/watt * (.265*R + .670*G + .065*B) Note that the default output of gensky is achromatic. This is simply so that renderings come out white-balanced. Since most people are after a picture rather than 4-digit accuracy, this is the default. The meaning of red, green and blue in Radiance is somewhat vague, and this is intentional. You can in fact define these to be whatever you want. If you wish that they represent integrated spectral power from lambda1 to lambda2, then that's what they represent. Of course, you have to figure out what to do with the pictures produced, as well as how to set the material reflectances appropriately, but the calculation is the same. Let me know if I can be of any more assistance. You might also try looking through the back issues of the Radiance digest, available by anonymous ftp from hobbes.lbl.gov in the /pub/digest directory, or conveniently indexed from our WWW site: http://radsite.lbl.gov/radiance/HOME.html -Greg Date: Mon, 27 Mar 1995 13:26:25 +0200 (MET DST) From: Maus Sender: Maus Subject: green-house questions To: greg@hobbes.lbl.gov Hi Greg, I'm trying your Radiance package to measure light efficiency in greenhouses under cloudy sky conditions. I'm specifically interested in the ratio between the light measured at a point just above the greenhouse and the light measured at a number of points 2 meters above the ground inside the greenhouse. I have some questions concerning these simulations. Plants respond to the same portion of the spectrum as the human eye. But the human eye responds best to green-yellow light while plants respond best to red-orange light. Furthermore, the response curve of the human eye is more or less a Gaussian curve while the response curve of a plant follows a saw-tooth. Am I right to say that I should not measure luminance like (.3*r + .59*g + .11*b) {watts} * 470 {lumens/watt} but substitute the rgb multipliers with other values to measure the light plants like best? How, thereupon, should I measure this specific light at points two meters above the ground? What exactly is the use of specifying a groundglow in addition to a skyglow?
Is it to simulate reflection of light by air molecules? I read once that for glass, the 8% difference between 96% (transmission through the medium itself) and 88% (transmissivity) is due to reflection, and it varies with incident angle. Does this mean the 88% is a mean percentage for all possible angles of incidence? In other words, is it the percentage of light coming through the glass under diffuse lighting conditions (the conditions I'm interested in)? Regards, Maurice Bierhuizen. Date: Mon, 27 Mar 95 09:26:14 PST From: greg (Gregory J. Ward) To: M.F.A.Bierhuizen@TWI.TUDelft.NL Subject: Re: green-house questions Hi Maurice, You are correct that you should not use photometric units (luminance or illuminance) to gauge the amount of plant-food light in a space. I don't know what factors you should use. However, it really doesn't matter unless your glass is tinted, because light is transmitted evenly over the visible spectrum for clear glazing. The 88% transmittance value is at normal (i.e. perpendicular) incidence, and the amount transmitted will decrease at higher incident angles. Radiance accounts for this variation, and asks that the user specify the transmission (amount of light not absorbed in one pass through the material) at normal incidence and (optionally) the index of refraction for the glass type. The ground glow accounts for light reflected from the ground towards the horizon, assuming that you have some local geometry for the ground in the vicinity of your model. It is not advisable in Radiance to specify the entire Earth as local geometry, as the difference between the largest and smallest object is then too great for the octree structure to manage. I hope this helps, and I wish you luck with your investigation. I assume you are using rtrace in your calculations. -Greg [P.S. I have been having a long discussion with Tony Yuricich about an advanced sky model he's been working on, and hopefully he will provide some code for us all in the coming months. I didn't include that conversation here because it was very specific and went on and on.] ======================================================================= RADIANCE PORTS Date: Thu, 20 Oct 1994 03:32:28 -0400 From: DWEINREBER@aol.com To: greg@hobbes.lbl.gov Subject: Disk space for Radiance I am a lighting designer in Nashville. I spoke with you on the phone a few months ago about Radiance. I am considering installing A/UX on my Centris 650 to run Radiance but I'm concerned about disk space. How much disk space is needed for Radiance? Are there any quirks or problems with running it under A/UX? Any plans on a PowerPC version? Thanks Dan Weinreber DWEINREBER@aol.com Date: Thu, 20 Oct 94 09:18:50 PDT From: greg (Gregory J. Ward) To: DWEINREBER@aol.com Subject: Re: Disk space for Radiance Hi Dan, A/UX itself requires about 120 Megs of disk space, and you should have at least 20 Megs of RAM installed to be comfortable. Radiance uses about 40 Megs of disk space on top of this, and I operate A/UX comfortably on a 160 Mbyte external drive. You can buy one from Apple with A/UX already installed, and it will save you some hassle (at the expense of a rather steep price per megabyte). There are no quirks that I know of running Radiance under A/UX. Unfortunately, Apple does not plan to carry their A/UX product to the PowerPC platform, which to my mind is really ideally suited to it. Instead, they're going to let their former enemy, IBM, service the PowerMac with their persnickety AIX product.
Radiance will run in this environment, but compiling it is a bit tricky because the compiler is so cranky. Also, AIX offers no access to your Macintosh applications, a key benefit of A/UX. I have no plans myself to port Radiance to the native Macintosh OS, though I just spoke yesterday with someone who might. It may happen, but not right away. -Greg Date: Sun, 30 Oct 94 00:35:30 +0100 From: Harald Backert To: greg@hobbes.lbl.gov Subject: Amiga port of Radiance 2.4 Hi Greg, maybe you remember me. I was the one who wanted to port Radiance to the Amiga half a year ago. My first attempt did not work as I expected (I was too busy trying to convert Radiance's K&R C into ANSI C, my fault). Then I contacted Per Bojsen in Denmark, who made the first port, and he told me that he would port a newer version of Radiance later. Well...half a year went by and nothing happened. So I started to compile Radiance again. And guess what: I now have a running and stable Radiance :-) I am going to upload my Amiga version to hobbes.lbl.gov, including the slightly modified sources. I had to make small changes like inserting a '#include ' and the like. Now my question: I would like to upload a complete package of Radiance ready-to-run for Amigas. This includes the standard Radiance files, binaries and ASCII docs translated with 'nroff -man *.1'. And two support packages for pipes. May I? greetings, Harald Date: Sun, 30 Oct 94 07:40:16 PST From: greg (Gregory J. Ward) To: Harald.Backert@rz.uni-regensburg.de Subject: Re: Amiga port of Radiance 2.4 Hello Harald, Thank you for writing, and thank you for porting Radiance to the Amiga! In answer to your first question, please feel free to upload your complete Radiance package for the Amiga to the /pub/ports/amiga directory on hobbes.lbl.gov with a suitable title. I haven't checked write permissions on that directory, but if you have any trouble you can always upload it to the /xfer directory and tell me to move it. -Greg ======================================================================= RENDERING PARAMETERS Date: Wed, 2 Nov 1994 16:41:12 +0200 (METDST) From: Shaul Baruch To: Ward greg Hi Greg, I don't forward this to the discussion group because I don't think it's so interesting to that forum. I have changed 2 parameters in my rpict bat file in order to get a "cleaner" picture (more rounded contour lines from falsecolor). My old parameters were: -lr 8 -lw 0.005 -as 1024 -ad 512 -ar 4 -aa .1 -ab 4 -st .01 -ps 1 -dj .75 -pt .001 My new parameters are the same except for -aa .05 & -ab 6. With the old parameters, together with a uniform cloudy sky and a 4x3 room with a skylight window (not using the illum function), it took about 10 hours to generate a picture. With the new parameters it took 48 hours to complete 94.68 % of the picture, but then I got a message: system out of memory in doambient, Radiance stub failed. Can you tell me why this happened and if there is any way to check such a thing in advance? What would be your advice for getting better (but realistic) contour lines? By the way, my PC has 16 MB RAM (12.5 MB extended RAM). best regards, Shaul Date: Wed, 2 Nov 94 09:14:46 PST From: greg (Gregory J. Ward) To: novebaru@inet.uni-c.dk Hi Shaul, You ran out of memory because you are storing so many values. You should probably increase -ad to 1024 and reduce -as to 512, use -ab 3 instead of 6 and set -aa to .1. Unfortunately, there's no way to know in advance how long a rendering is going to take or how much memory it will use.
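[For reference, a sketch of what that advice looks like on the command line -- Shaul's old option list with -ad 1024, -as 512 and -ab 3 substituted as suggested above; the view file, octree and picture names are invented for illustration:

rpict -lr 8 -lw 0.005 -ad 1024 -as 512 -ar 4 -aa .1 -ab 3 -st .01 -ps 1 -dj .75 -pt .001 -vf room.vf room.oct > room.pic ]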
You just have to learn from experience with your particular problems. It varies quite a lot from one scene to the next. I would also recommend that you set the -av parameter to something non-zero, for more accurate results. You can use a zonal cavity approximation to do this in most cases. It's a bit difficult to do this for daylighting situations, unfortunately, but the formula I recommend is:

	ambient_value = (Sum_i PHI_i)/(pi * A_total) * rho_avg/(1 - rho_avg)

where:

	Sum_i PHI_i = sum of all source output flux (in watts)
	A_total = total surface area of all surfaces
	rho_avg = area-weighted average of surface reflectance

-Greg [The following was culled from the discussion group list:] Date: Fri, 2 Dec 94 11:01:50 PST From: greg (Gregory J. Ward) To: crones@puffin.curtin.edu.au, radiance-discuss Subject: Re: Radiance farming. Hey folks, I don't know why your renderings are taking so long, but somehow I feel that I'm to blame. (Now why is that?) I've been doing a bunch of renderings myself lately, and they do take a while, but I don't think I'd ever wait 400+ hours for a single picture! The last high-quality 1000x700 (or so) picture I generated took about 10 hours on my SGI Indigo R4000, and it was a fairly complex space. Since I want to encourage people to use rad (and the new GUI trad), I feel I should give some appropriate hints on minimizing rendering time or at least understanding why a particular rendering is taking so darned long. As most of you know already, the indirect calculation in Radiance is what makes it special and also what can make it especially slow if you're not careful about how you apply it. Using rad, most of the so-called "ambient" parameters (-a*) are derived from the following variables. I have arranged their values in order of least to most costly in calculation time:

	QUALITY=	Low	Medium	High
	INDIRECT=	0	1	2	3 ...
	VARIABILITY=	Low	Medium	High
	DETAIL=		Low	Medium	High

In addition to the above rad variables, the PENUMBRAS variable, if set to "True", will significantly slow down most renderings. The RESOLUTION setting also has some effect, though not as much as you would suppose. Also, setting the AMBFILE variable is generally a good idea, since it improves the results noticeably without increasing rendering time by much. In fact, for multiple renderings of the same scene, rad will proceed much faster with an ambient file. Now, let me explain a bit how the above variables affect calculation time. The QUALITY variable has the greatest effect on rendering time, which is why I listed it first. A low quality rendering NEVER uses the indirect calculation, regardless of the setting of the INDIRECT variable, unless overridden by the render options variable. (We'll discuss when you might want to do this a little later.) A medium quality rendering uses as many bounces as indicated by the INDIRECT setting, and a high quality rendering uses INDIRECT+1 bounces in its calculation. Even more significant is how the QUALITY variable changes all the other rendering parameters that affect accuracy. Since these changes are too numerous to list exhaustively, suffice it to say that there's a BIG difference between the times and results for different settings of the QUALITY variable. The next most important variable after INDIRECT, which controls the number of bounces in the calculation (along with the QUALITY setting as described above), is the VARIABILITY variable. This tells rad just how hard it is to compute the indirect component at any given point in the scene.
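[As a point of reference, a bare-bones rad input file using the variables listed above might look something like the following sketch; the zone dimensions and file names are invented for illustration:

ZONE= Interior 0 4 0 3 0 2.7
scene= room.rad
materials= room.mat
QUALITY= Medium
DETAIL= Medium
VARIABILITY= Low
INDIRECT= 1
PENUMBRAS= False
AMBFILE= room.amb

rad derives the -a* and other rendering options from settings like these, so the rpict parameters normally don't need to be set by hand; the render variable is there to override them when necessary, as described below.]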
If the variability is low, then we don't have to send as many samples around to figure out the indirect contribution. If VARIABILITY is set to medium, then we send about three times as many samples, and space our calculations more closely. If VARIABILITY is set to high (something I don't recommend unless you have direct sun penetrating your space), then about 1500 samples will be sent out at each calculation point, and there will be roughly 4 times as many of these points as there would be with a low setting. Therefore, all else being equal, you should expect a VARIABILITY=High calculation to take roughly 25 times as long as a VARIABILITY=Low calculation -- something to think about! Finally, the DETAIL variable has a modest effect on the calculation time, as it controls the "ambient resolution" calculation parameter, which determines the minimum spacing between indirect calculation points. A low detail sene means we can afford to limit our indirect value density to a modest level compared to a medium or high detail scene. The precise effect this will have depends on the geometry of the particular scene, but it generally doesn't have more than a 10 times effect moving from a low to a high setting. Unfortunately, the above information is not enough to predict how long a given rendering will take to complete. (The best indicator still is the time elapsed so far and how much has finished, as given by rpict's reporting facility.) However, there are some important scene-related factors that we should consider. The most important scene characteristic that affects rendering time is geometric detail. (Note that this would seem to contradict my placement of the DETAIL variable as the least important setting. While it is true that the setting of this variable has a minor effect, the ACTUAL scene detail has a rather major one. In other words, changing the DETAIL variable from "medium" to "high" might double the rendering time, but increasing the actual scene complexity will have a much greater effect.) This is because Radiance adjusts the calculation of indirect illumination to the local scene complexity, unlike most radiosity rendering algorithms. (Thus Radiance maintains accuracy with the minimum effort.) Increasing the number of do-dads and knick-knacks therefore increases the number of indirect calculation points required to maintain accuracy. A worst-case scenario for Radiance is something like a packed forest, where every pine needle is participating in the indirect calculation. In scenes of such complexity, a different approach is required, and that's when we bring in the render variable to override some of the settings determined by rad. In the worst-case of a forest given above, we would probably want to turn off the indirect value storage and interpolation altogether, and greatly reduce the number of sample rays sent out, i.e: render= -ab 1 -aa 0 -ad 10 -as 0 Here we are forcing a 1-bounce calculation (more than that would be painful), set -aa to 0 to turn off interpolation, and using just 10 samples over the hemisphere at each point. The resulting picture will be somewhat noisy, but a forest is not a visually quiet place, so chances are no one will notice. (And if a tree falls, no one will hear it.) The second, more common case encountered is one where most of the geometry is fairly plain, consisting of walls, furniture and the like, but parts of the scene are incredibly detailed, like rows and rows of books in an enormous bookcase. 
We want to use our indirect interpolation technique to get the smoothest possible results over most of the scene, but if we apply it also to the bookcase, our rendering will never finish. What you should do in such a case is employ the "ambient selection" options, -ae and -aE to explicitly exclude materials from the indirect calculation, or -ai and -aI to include them. (You can use only one set or the other.) Thus, you "remove" the offending objects from consideration (giving them the default -av ambient value), speeding the overall calculation. Since the objects removed have a large amount of geometric detail, the loss in illumination detail will probably go unnoticed. I use this technique all the time in my own renderings, and it is one of the chief tricks that make high-quality renderings tractable. (Ultimately, a better solution would be automatic geometric simplification, but the problems involved are nasty, nasty and nasty.) For example, let's say we had venetian blinds on the window that were slowing down our nighttime calculation. The material used for the blind slats is called "slat_mat". We would then add the following setting to our render variable: render= -ae slat_mat If there were other materials involved, we could list them in additional -ae options, or use an -aE "file" option, where the file contains a list of the appropriate materials. I hope this helps to shed some light on a very murky subject. -Greg P.S. In answer to Simon Crone's question about running rpiece more elegantly, I usually use the OPTFILE variable of rad to create a rendering options file, then apply that to rpiece, like so: % rad -n -s scene.rif OPTFILE=scene.opt % rpiece @scene.opt [other options] scene.oct & Date: Tue, 6 Dec 94 15:41:24 -0500 From: westin@dsg145.nad.ford.com (Stephen H. Westin) To: greg@hobbes.lbl.gov Subject: Radiance newbie question Greg, I'm finally trying to make some real pictures with Radiance. I'm struggling, though. My images are very speckly in what seems the specular component of a rough surface. Direct lighting is fine, but the (diffuse) reflection of the environment is extremely noisy. Which of the ninety-'leven parameters to "rpict" should I tweak? I have tried -pt, -st, -ab, and a couple of others I've forgotten. "-ps .2" gives the message rpict: fatal - command line error at '-ps' so I can't use that. I'll E-mail you a uuencoded image file; I'm sure it's something simple. The material I'm using is void metal CHAMPAGNE 0 0 5 0.7 0.641 0.58 1 .4 and yes, I know that you don't recommend a roughness greater than 0.2, but it doesn't seem to make a lot of difference. I'm trying to create some approximation of metallic paint, which gets its main color from metal and pigment particles suspended in the binder, but includes a clear coat that reflects in an ideal specular fashion. I haven't dug into this deeply enough to see if I can do this directly in Radiance; I would like a "metal" with a "plastic" or "glass" overlayed. By the way, it would be helpful if your documentation could be expanded; at least tell me what the default value is for each parameter, so I have some idea whether I've tweaked it or not. Any further elucidation as to the function of each would also help. -Steve Date: Tue, 6 Dec 94 13:31:24 PST From: greg (Gregory J. Ward) To: westin@dsg145.nad.ford.com Subject: Re: Radiance newbie question Hi Steve, The speckle effect you're seeing is due to the fact that rpict sends at most one sample per pixel to the image plane. 
If you want smooth results, you have to reduce the image using pfilt with the -x and -y options. I usually use the "rad" interface to control rendering, and I highly recommend that you do, too. It controls many of those nasty parameters based on some more intuitive scene characteristics. I agree wholeheartedly that the documentation is seriously lacking, which is why I've started writing a book on Radiance. The many parameters are confusing, even to me. To see the default settings, type: rpict -defaults This also offers a brief reminder of their meaning. I meant at one time to write a document describing each in gory detail, but decided instead to write the rad program to insulate users as much as possible. The ultimate solution is a book explaining the calculation techniques in Radiance and how the parameters relate to them. In the meantime, you're stuck with the rpict man page and the notes in ray/doc/notes/rpict.options. I'd be happy to help explain particular options you're having trouble with. The -ps option takes an integer, which is why you got the cryptic error message. I have updated the manual to make this more clear for the next release. I've also found it useful to have the Radiance Reference Manual online, and it is accessible on the Web at "http://radsite.lbl.gov/radiance/HOME.html". For quicker access, I recommend downloading the desired pages. Metallic paint is kind of tricky. I haven't worked with this much, myself. You might try modeling it as plastic, giving the diffuse component the desired color. I don't think this will produce the sparkle that comes off of metal flakes, though. What you've got (without anti-aliasing) actually looks pretty good in that regard, even though it was achieved by improper means. The ultimate solution may lie in the creative application of the BRTDfunc type, which takes expressions as its arguments. Nasty, but very flexible. I'm flattered that you're working with this. I hope you don't get too frustrated early on and give up... I'd miss the feedback. -Greg ======================================================================= SGI BUG To: GJWard@lbl.gov Subject: Radiance..... Date: Wed, 02 Nov 94 11:37:27 +0000 From: David Hedley Hello, I am research student in Computer Graphics at the University of Bristol, England and I recently attended a workshop on Radiance given by Kevin Lomas and John Mardeljevic at the De Montford University, Leicester. I was very impressed by the results obtained and I am very interested in working on (and with) the program. I am, however, having some difficulty in getting the program to work correctly on my SGI Indigo. The program renders some of the demo scenes correctly (townhouse, soda, bath), but it core dumps when trying to render `conf', leaving no indication in the core file where the error occured. This happens irrespective of the C compiler used (I have used gcc 2.5.8 and the standard SGI ANSI C compiler), and is not dependent on any optimisation flags. My system is as follows: SGI Indigo (R3000) 32MB Ram IRIX 5.2 If you can help at all I would be very grateful.... keep up the good work! David Date: Wed, 2 Nov 94 09:02:23 PST From: greg (Gregory J. Ward) To: hedley@cs.bris.ac.uk Subject: Re: Radiance..... Hi David, There is a bug in the IRIX 5.2 libfastm.a math library which sometimes causes core dumps on the SGI's. You should recompile (or at least relink) all of the programs, removing the "MLIB=-lfastm -lm" word from the rmake script in your Radiance executable directory. 
Hopefully, this will solve your problem. (A modified version of makeall is available along with a few other patches from the /pub/patch directory on the anonymous ftp account of hobbes.lbl.gov.) -Greg ======================================================================= RADIANCE VS. POV AND RENDERMAN Date: Sat, 12 Nov 1994 13:05:29 -1000 To: greg@hobbes.lbl.gov From: aersloat@uhunix.uhcc.Hawaii.Edu (Austin Sloat) Subject: Radiance Greg, I was wondering if you could give me an idea of the benefits/limitations of Radiance as compared to POV and also Renderman. This is mostly for others. Thanks, Austin Date: Sun, 13 Nov 94 09:58:32 PST From: greg (Gregory J. Ward) To: aersloat@uhunix.uhcc.Hawaii.Edu Subject: Re: Radiance Hi Austin, Here is my biased analysis of Renderman and POV compared to Radiance. You'll have to ask on network news to hear from some neutral party who has used these for a real evaluation.

Renderman
=========
+ Support of many textures and geometric primitives.
+ General programming interface for shading calculations (local illumination).
+ Links to many commercial geometric modeling packages, particularly on Mac.
+ Can produce beautiful pictures and animations with motion blur.
- Lighting calculation is not physical, and approximations are unreliable.
- No numerical output of light levels, etc.
- Costs money and no source code.

POV
===
+ Easy to read input language.
+ Comes with many canned texture functions and surface primitives.
+ Wide user support base.
+ 3-d modeler available.
- Non-physical lighting calculations that are very difficult to circumvent.
- No numerical output.
- Materials, shading models and textures cannot be modified except in source.
- Slow compared to Radiance for identical rendering tasks.

Hope this helps. -Greg ======================================================================= MIRROR ABUSE From: COURRET Gilles To: greg@hobbes.lbl.gov Subject: Radiance problem Hi greg! I have a problem with rtrace on a scene composed of a conventional uniform sky and ground "cou_uni.rad"+"outside.rad" and a building "indor_shed_mir.rad"+"shed.rad". It is one of my first calculations with the material: "void metal gray_paint 0 0 5 1 1 1 1 0" (see indor_shed_mir.rad) which is equivalent to: "void mirror gray_paint 0 0 3 1 1 1" isn't it? (maybe "metal" takes more calculation time!) Anyway, the rtrace command produces the following message: "rtrace: fatal - possible modifier loop for polygon "mur_est" All the files involved in this problem are available in the tar file I put on your server: "xfer/pb221194.tar" The version I used is: SOFTWARE= RADIANCE 2.4 official release April 20, 1994 This scene ran before without any trouble, with all the same calculation parameters except -lr, which was 50 instead of 5000. Thus, I suppose a problem (perhaps a memory problem) arises because the specular multireflection iteration goes too far. You can see in the file "indor_shed_mir.rad" that the 4 walls made of mirror are parallel two by two! My purpose is to simulate a building with infinite width and length (or at least to approach such an asymptotic situation). That is why I need to let it go so far! I thank you in advance for your help, Gilles. Date: Tue, 22 Nov 94 09:25:31 PST From: greg (Gregory J. Ward) To: courret@divsun.unige.ch Subject: Re: Radiance problem Hi Gilles, Using metal is not the same as mirror, since the latter (mirror) reflects light sources and metal does not.
The failure reported by the program is due to the absurd number of reflections, which trips a loop detection test. If this test had not tripped, your run would continue into the next millennium, probably still working on the first scanline. I recommend against modeling an infinite room in this way. Always remember the basic tenet of Radiance, which is "if you can build it and measure it in the real world, you can model it and simulate it in Radiance." The converse is also true, i.e. "if you cannot build it and measure it in the real world, you cannot model it and simulate it in Radiance." In other words, you should model your "infinite" space as simply a very large space, not a space with perfect mirrors for walls. -Greg ======================================================================= RETROREFLECTIVE SURFACES From: "Nick C. Bellinger" Subject: Retro-reflective surfaces in Radiance To: GJWard@lbl.gov Date: Mon, 5 Dec 1994 16:31:12 -0500 (EST) Greg, It has been suggested by a number of people that I use Radiance to obtain simulated images of an optical nondestructive inspection technique which was developed in Canada. The technique involves the light from a light source reflecting off an object and this reflected light hitting a retro-reflective surface. The surface of the object being examined is treated with a liquid to improve its reflectivity. The light hitting the retro-reflective surface is reflected back onto the surface, which then reflects light to a camera that is nearly coincident with the light source. The surface of the object (aluminum) contains small defects, such as dents. We are carrying out finite element analysis on lap splices which contain corrosion products. We are trying to simulate the out-of-plane displacements which are caused by the corrosion. These FEM results are then transformed into triangular surfaces and imported into Radiance using tmesh2raw. I did not include the surface normal vector in this file since I am not quite sure how one gets these values from raw data. This gives the surface of the object we want to examine. A scene then must be created which includes a light source (a halogen source is used in the actual technique), a retro-reflective surface and a camera. I would like to get your opinion on whether Radiance can model this situation, particularly the retro-reflective surface. If you think it can, can you suggest how to go about modeling the surface? Thanks Nick Bellinger National Research Council Canada e-mail: ncb@m14challenge.iar.nrc.ca Date: Mon, 5 Dec 94 14:15:46 PST From: greg (Gregory J. Ward) To: ncb@m14challenge.iar.nrc.ca Subject: Re: Retro-reflective surfaces in Radiance Hi Nick, I believe that Radiance should be able to model your scene. Others have used a simple trick to create retroreflective surfaces, which is to use a reflective metal surface and adjust the surface normal to always point in the direction opposite the incoming ray. It's a bit of a cheat, but it does work. Another alternative available to you in this case is to create the physical geometry of a retroreflective mirror, which generally consists of many cube corners. For the first approach, try:

void texfunc retro-normal
4 -Dx-Nx -Dy-Ny -Dz-Nz .
0
0

retro-normal metal retro-mirror
0
0
5 .95 .95 .95 1 0

retro-mirror polygon retro-reflector
	...etc.

Above I am assuming that the retro-mirror has 95% reflectance. In your situation, you're probably not so concerned about total light flux, so the actual reflectance value may not be so important.
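[Spelled out with made-up coordinates -- say a 1 meter square patch in the x-y plane -- that last primitive might read:

retro-mirror polygon retro-reflector
0
0
12
	0 0 0
	1 0 0
	1 1 0
	0 1 0

Only the vertices are invented here; the modifier and argument counts follow the usual Radiance polygon syntax.]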
I hope this helps, and let me know if you have any other questions. -Greg ======================================================================= COLOR AND REFLECTANCE From: sick@ise.fhg.de Subject: To: greg@hobbes.lbl.gov (gregory ward) Date: Tue, 20 Dec 1994 14:15:51 +0100 (MEZ) Hi Greg, we interrupted a several day long discussion among several colleagues on spectral calculation with RADIANCE. I think with one key question answered by you we can resume discussion after x-mas. Here is it: What integral radiance value has a RADIANCE light source with 100 50 10 RGB radiance values? We find only white sources and a confusing statement that for a particular channel the total watts/sr/m2/spectrum are given, which would indicate that no colored light sources are possible. However, we would like to do spectral calculations (even if they are limited to 3 channels) and even - in general - independent of the color representation on the monitor (we consider that a separat issue). Thanks a lot for your advice and we wish you a merry christmas and a healthy and happy and successful new year! Best regards, Fred Sick -- ---------------------------------------------------------------------------- Friedrich Sick MAIL : Fraunhofer Institute for Solar Energy Systems Oltmannsstr. 5 D-79100 Freiburg Germany PHONE: +49-(0)761-4588-133 FAX: +49-(0)761-4588-132 email: sick@ise.fhg.de ---------------------------------------------------------------------------- Date: Wed, 21 Dec 94 11:32:32 PST From: greg (Gregory J. Ward) To: sick@ise.fhg.de Hi Fred, Color is indeed a confusing subject in Radiance. The RGB system used by default is non-standard, simply because the only existing standards for RGB color do not match typical computer monitor phosphors at all. I have recently modified the color conversion routines in Radiance to allow the user to redefine the (x,y) chromaticity coordinates corresponding to the canonical phosphors used, and in this way set the color system. It is impossible to know what the total radiant energy of a light source is based on RGB settings, since they say nothing of invisible radiation. As you know, for most light sources (incandescents especially), much of the radiated energy is in the infrared and therefore not considered in the setting of Radiance RGB values. However, most of the code in Radiance does not hinge on the actual meaning of the RGB channels -- they can be used to represent whatever wavebands you decide. You don't even have to modify the code, just go ahead and use different values. The results will have to recombined or something, for they won't be displaying true colors on a computer monitor otherwise, but that's the only real difference. I believe I addressed this problem a number of times in past digests, which you can peruse at our WWW site: http://radsite.lbl.gov/radiance/HOME.html To better answer your question, though, let's assume you have a light source whose Radiance RGB values are set to "100 50 10". The output of this source in lumens would then be: 179 lumens/watt * (.265*100 + .670*50 + .065*10) or 10856 lumens. These coefficients were taken from ray/src/common/color.h, where all such things are stored. As I said before, it is not possible to determine the radiant energy from the source, since we know only about its output in the visible spectrum. I hope this was more help than confusion. I'm sick, and not thinking too clearly. 
-Greg From: Peter Apian-Bennewitz Subject: spec,roughness and rho To: gjward@lbl.gov (Greg Ward) Date: Wed, 8 Feb 1995 13:03:08 +0000 (GMT) Dear Greg, please pardon if this is FAQ, I haven't paid much attention to the digest lateley. Do you have material on the subject of specifying isotropic plastic and metal parameters (one color channel,spec,roughness) from direct-diffuse and direct-direct measurements ? Direct-direct could be at normal and 45 degrees incident angle, outgoing at the geometrical reflectance angle. The idea is to get these basic Radiance Parameters from some easily measured quantity. Three radiance parameters - three measurement values. should work - ? As compared to fitting the curves to full fledged BRTF measurements - ? My this-integral-is-easy feeling is not well developed - How follows direct-diffuse reflection from the isotropic model ? It's easier to ask before feeding it to mathematica, in case you have it shelved. Are there more tutorial guidelines available or people rely on your expertise explaining to them in person ? :-) Don't want to offence you- Just got a bit sidetracked from writing my thesis together and got stuck with the reflection model. The key point being to align the radiance parameters and photometric terms. Obviously they are close. Just wondering wether I'm rediscovering the wheel here... What do other people when giving a paint or material to simulate the room with? As always, thanks for time, help and info Peter -- Peter Apian-Bennewitz, apian@ise.fhg.de Date: Wed, 8 Feb 95 10:00:50 PST From: greg (Gregory J. Ward) To: apian@ise.fhg.de Subject: Re: spec,roughness and rho I assume you have my 1992 Siggraph paper on reflection already? If not, I should certainly send you that one. You also didn't ask for my 1994 paper, so I assume you have it or don't know about it. Which is it? (I can't remember what I've sent to you before.) Unfortunately, taking a few luminance measurements from a surface under known lighting conditions is not enough to characterize the reflectance even in terms of a simple isotropic Gaussian model. I have tried myself to develop such techniques, only to find that it couldn't be done reliably. The problem is two-fold. One, spot luminance values from a surface with a highlight are extremely unstable -- the slightest shift in position causes a radical change in the value. Two, except for very rough surfaces, it is impossible to measure the highlight with a normal luminance meter -- the spot is simply too small. Direct/diffuse measurements work only if the surface has a perfect diffuse component and a perfectly smooth specular component. If the specular highlight is spread out, then one must characterize this spreading, which is nigh impossible with standard photometric equipment. The best way to do it aside from measuring the complete BRDF is to do it by eye, believe it or not. I am currently pursuing this approach with a portable device and some standard samples. The idea is to match the material in question to a sample to discover its actual roughness. The concept seems good, but remains to be proven. I'm sorry if this didn't answer your question. It's a tough one, all right. -Greg From: gerold@ise.fhg.de Subject: metal and plastic To: greg@hobbes.lbl.gov Date: Thu, 23 Mar 1995 11:04:16 +0100 (MEZ) Dear Greg, during my work to calculate BRDF data with RADIANCE I had a problem with "Plastic" and "Metal". That means "Metal" is just fine because it behaves as I would think but "Plastic" does not. 
But let me tell you what I've done: 1. I created a horizontal surface with the following material description: void plastic spieg 0 0 5 1 1 1 0 0 2. The sun altitude is 45 deg, the azimuth 0 deg. The view point is along the direct reflected ray from the surface. The view direction is along this direct reflection. 3. I calculated the radiance with "rtrace", the ambient bounces are set to 1 Result: L = 91 W/m2sr what is totally fine 4. I changed the specularity to 1 (even if this doesn't make sense) and got a radiance of 6.77e6 W/m2sr what is also absolutely fine. Changing the specularity again to 0.5 gives me 3.39W/m2sr, that means exactly half of the forgoing result --> great 5. Then I changed the RGB values to 0.5 without getting any different results to RGB = 1. The only exception is if the specularity is 0. 6. If I change the material to "Metal" everything is like I would expect it. The radiance I get depends on the specularity as well as on the RGB values. Well, in "The RADIANCE 2.4 Synthetic Imaging System" and in the "Behavior of Materials in RADIANCE" you mention that plastic is a material with uncolored highlights, that means independent of the RGB values (except for the specularity of 0). So, now there comes my question: What kinds of materials should be modelled with plastic? The ones with a specularity less than 0.1? If so the same problem occurs, there is no dependency on RGB and I have a hard time convincing my tummy to accept this. For my sense the behaviour of metal is the more real one so I would prefer it to plastic even if the specularity is <0.1. Am I totally wrong with my knowledge of plastic materials or is the secret in the roughness which was 0 in my cases? I am sorry to bother you with this and thanks a lot for your answer. Gerold -- Gerold Furler MAIL: Fraunhofer Institute for Solar Energy Systems Oltmannsstr. 5, 79100 Freiburg, Germany PHONE: +49 (761) 4588 308 FAX: +49 (761) 4588 100 EMAIL: gerold@ise.fhg.de Date: Thu, 23 Mar 95 09:31:22 PST From: greg (Gregory J. Ward) To: gerold@ise.fhg.de Subject: Re: metal and plastic Hi Gerold, The nature of plastic is one where specularly reflected light is not affected by the color of the material. Specifically, light reflected as part of the specular component is always white. This is a good model for surfaces such as plastic, marble, varnished wood, and most painted surfaces. The physical principle behind this model is that specular reflections happen at the outer surface, before the light encounters any dye particles. Metal is different, because the surface itself is a sort of dye. It is an approximation, but a very good one for most materials. -Greg Date: Mon, 27 FEB 95 16:23:36 BST From: HKM5JACOBSA@CLUSTER.NORTH-LONDON.AC.UK To: greg Subject: CIE colour system Hi, Greg I am currently using RADIANCE to simulate sportshalls that are to be built in the near future. The sportscouncil don't know what kind of luminaires are best suited to match the requirements, which are relatively high especially when the halls are used for badminton. So I try to help them to make this decision by rendering computer images using different types of fittings. My problem now is that I have the colourdata for the walls, floor and roof paint given in CIE notation. For example: Silver Grey is defined as x = 20 y = 10 on page R90B (red plus 90 % blue) light reflectance factor = 50.22 % But in RADIANCE I have to specify the materials by means of red, green, and blue. How can I convert the data into the proper format? 
I would be glad if you could help me solving this problem. I could find nothing on that in the digest. I am loocking forward to hesring form you. Thanks a lot! Axel Date: Mon, 27 Feb 95 09:08:40 PST From: greg (Gregory J. Ward) To: HKM5JACOBSA@CLUSTER.NORTH-LONDON.AC.UK Subject: Re: CIE colour system Hi Axel, It just so happens that I recently finished a conversion routine in .cal format for RGB <=> XYZ translation. I am enclosing it below. Cut out this data and put it a file. (I called mine "xyz_rgb.cal".) Then, use calc or rcalc for your conversions, like so: % rcalc -f xyz_rgb.cal \ -e '$1=R(Xi,Yi,Zi);$2=G(Xi,Yi,Zi);$3=B(Xi,Yi,Zi)' \ -e 'Yi=$1;Xi=$2/$3*$1;Zi=$1*(1/$3-1)-Xi' Then, on the standard input, give the Yxy triples (separated by spaces or tabs). Rcalc will produce the corresponding RGB triples on the output. (That last rather nasty-looking expression simple converts Yxy to XYZ.) Let me know if you have any troubles with this. -Greg ---------------------------------------------------------------------- { Convert between XYZ and RGB coordinates. 2/17/95 Be sure that CIE_x_r, etc. definitions are consistent with those in ray/src/common/color.h. } {*** The whole calculation is based on the CIE (x,y) chromaticities below ***} CIE_x_r : 0.640 ; { nominal CRT primaries } CIE_y_r : 0.330 ; CIE_x_g : 0.290 ; CIE_y_g : 0.600 ; CIE_x_b : 0.150 ; CIE_y_b : 0.060 ; CIE_x_w : 1/3 ; { use true white } CIE_y_w : 1/3 ; WHTEFFICACY : 179. ; { luminous efficacy of uniform white light } { Derived constants } CIE_D : CIE_x_r*(CIE_y_g - CIE_y_b) + CIE_x_g*(CIE_y_b - CIE_y_r) + CIE_x_b*(CIE_y_r - CIE_y_g) ; CIE_C_rD : (1./CIE_y_w) * ( CIE_x_w*(CIE_y_g - CIE_y_b) - CIE_y_w*(CIE_x_g - CIE_x_b) + CIE_x_g*CIE_y_b - CIE_x_b*CIE_y_g ) ; CIE_C_gD : (1./CIE_y_w) * ( CIE_x_w*(CIE_y_b - CIE_y_r) - CIE_y_w*(CIE_x_b - CIE_x_r) - CIE_x_r*CIE_y_b + CIE_x_b*CIE_y_r ) ; CIE_C_bD : (1./CIE_y_w) * ( CIE_x_w*(CIE_y_r - CIE_y_g) - CIE_y_w*(CIE_x_r - CIE_x_g) + CIE_x_r*CIE_y_g - CIE_x_g*CIE_y_r ) ; { Convert CIE XYZ coordinates to RGB } XYZ2RGB(i,j) : select(i*3+j+1, (CIE_y_g - CIE_y_b - CIE_x_b*CIE_y_g + CIE_y_b*CIE_x_g)/CIE_C_rD, (CIE_x_b - CIE_x_g - CIE_x_b*CIE_y_g + CIE_x_g*CIE_y_b)/CIE_C_rD, (CIE_x_g*CIE_y_b - CIE_x_b*CIE_y_g)/CIE_C_rD, (CIE_y_b - CIE_y_r - CIE_y_b*CIE_x_r + CIE_y_r*CIE_x_b)/CIE_C_gD, (CIE_x_r - CIE_x_b - CIE_x_r*CIE_y_b + CIE_x_b*CIE_y_r)/CIE_C_gD, (CIE_x_b*CIE_y_r - CIE_x_r*CIE_y_b)/CIE_C_gD, (CIE_y_r - CIE_y_g - CIE_y_r*CIE_x_g + CIE_y_g*CIE_x_r)/CIE_C_bD, (CIE_x_g - CIE_x_r - CIE_x_g*CIE_y_r + CIE_x_r*CIE_y_g)/CIE_C_bD, (CIE_x_r*CIE_y_g - CIE_x_g*CIE_y_r)/CIE_C_bD ); noneg(x) : if(x, x, 0); R(X,Y,Z) : noneg(XYZ2RGB(0,0)*X + XYZ2RGB(0,1)*Y + XYZ2RGB(0,2)*Z); G(X,Y,Z) : noneg(XYZ2RGB(1,0)*X + XYZ2RGB(1,1)*Y + XYZ2RGB(1,2)*Z); B(X,Y,Z) : noneg(XYZ2RGB(2,0)*X + XYZ2RGB(2,1)*Y + XYZ2RGB(2,2)*Z); { Convert RGB to CIE XYZ coordinates } RGB2XYZ(i,j) : select(i*3+j+1, CIE_x_r*CIE_C_rD/CIE_D,CIE_x_g*CIE_C_gD/CIE_D,CIE_x_b*CIE_C_bD/CIE_D, CIE_y_r*CIE_C_rD/CIE_D,CIE_y_g*CIE_C_gD/CIE_D,CIE_y_b*CIE_C_bD/CIE_D, (1.-CIE_x_r-CIE_y_r)*CIE_C_rD/CIE_D, (1.-CIE_x_g-CIE_y_g)*CIE_C_gD/CIE_D, (1.-CIE_x_b-CIE_y_b)*CIE_C_bD/CIE_D ); X(R,G,B) : RGB2XYZ(0,0)*R + RGB2XYZ(0,1)*G + RGB2XYZ(0,2)*B; Y(R,G,B) : RGB2XYZ(1,0)*R + RGB2XYZ(1,1)*G + RGB2XYZ(1,2)*B; Z(R,G,B) : RGB2XYZ(2,0)*R + RGB2XYZ(2,1)*G + RGB2XYZ(2,2)*B; { Convert spectral radiance in watts/sr/m^2 to luminance in cd/m^2 } luminance(r,g,b) = WHTEFFICACY * Y(r,g,b) ; Date: Thu, 2 Mar 95 17:05 BST From: HKM5JACOBSA@CLUSTER.NORTH-LONDON.AC.UK To: GREG Subject: Re: CIE 
colour system Hi, Greg! The new command line is working now. Thank's a lot. It is a very useful tool for my work. Just to make it sure: x and y have to be within the borders of zero and one, and so should Y, is that right? I have the reflectance given in per cent (0...100%). So all I need to do is divide this value by 100, is that correct? The values I got using the formula like this seem to bethe right ones to me. Another question I have is a little bit more general. Do you happen to have information on all the different colour systems (scandinavian, rgb, cmyk, etc.) and ways of converting them into each other? I couldn't find anything in our library. Can you name any sources of information such as publications, book, mags? Where did you get the algorithm you used in the xyz_rgb.cal from?? Thank you for all the help. Bye, bye! Axel Date: Thu, 2 Mar 95 15:27:53 PST From: greg (Gregory J. Ward) To: HKM5JACOBSA@CLUSTER.NORTH-LONDON.AC.UK Subject: Re: CIE colour system Hi Axel, Yes, all values input to rcalc in this script should be between 0 and 1. You will find that highly saturated colors will not convert well, since it is easy to go out of the smallish, triangular gamut defined by the three phosphor chromaticities defined in xyz_rgb.cal. The algorithm in xyz_rgb.cal was taken from the book "Procedural Elements for Computer Graphics" by David Rodgers (McGraw Hill). The actual phosphor values used were taken from some other place, and I believe them to be more or less typical of modern computer workstations. As far as I know, the only standards defined for RGB conversion are in the TV broadcasting industry, and they are not representative of computer monitors. It is best to stick with CIE color systems wherever possible, and actually measure phosphors on the destination device if you want accurate color representation. -Greg From: Peter Apian-Bennewitz Subject: brtf files and glow light sources To: gjward@lbl.gov (Greg Ward) Date: Sat, 1 Apr 1995 15:27:33 +0000 (GMT) Dear Greg, hope you have a nice weekend, and that tube time is not all weekend long. - I finally sat down and tried to solidify my views on how brtf functions and glow work together: 1. It's RTFM, that the brtf functions govern rho-si, not rho-a, so the ambient calculations don't care about the brtf. This is consistent with my tests. However rho-a is not zero where I would guess it should be: void transfunc tf 2 at at.cal 0 6 0 0 1 0 1 1 at.cal: at(lx,ly,lz)=0 When lit with source glow from the backside, a polygon with tf material is still shining. Why ? 2. When applying mkillum to the above scene, the brtf doesn't seem to be applied either. Using "#@mkillum l+ d=200 s=100 i=tf m=mk_func c=a b=0". Am I right at visualizing mkillum shooting rtrace rays into the backside space - and shouldn't these rays find the glow source? Since I'm not quite sure wether I missed something here, I take the easy way and ask... help, as always, is much appreciated, cheers Peter -- Peter Apian-Bennewitz, apian@ise.fhg.de Date: Mon, 3 Apr 95 11:24:05 PDT From: greg (Gregory J. Ward) To: apian@ise.fhg.de Subject: Re: brtf files and glow light sources Hi Peter, This is confusing and you will be disappointed to hear that the problem is a lame BRTF calculation that doesn't know how to sample according to an arbitrary distribution function. 
I was surprised to hear that you were getting some ambient from your purely specular transfunc material, but once I looked at the code in m_brdf.c, I remembered that I multiplied the ambient value from the reverse side by the full transmission quantity rather than just the diffuse quantity, and the reason was that this light was not accounted for properly in the "specular" component. You see, I don't know how to efficiently sample an arbitrary BRTDF, so although Radiance includes the specification for it, it only computes this component as part of the direct calculation. Only the standard Guassian reflectance function (plastic, metal, trans, plastic2, metal2 and trans2) is really a complete calculation, since I have a formula for Monte Carlo sampling of those distributions. Others who have sampled arbitrary functions have used a uniform sampling over the hemisphere and weighted the samples by the BRDF, which is terribly inefficient. Computing which samples to take is a very hard problem, and requires large amounts of processing time for each sample or else huge lookup tables saying where to sample. I hope to be working on this problem in the next year or two, but with the book and my other projects, it isn't going to be a top priority. -Greg From: apian@ise.fhg.de Subject: Re: brtf files and glow light sources To: greg@hobbes.lbl.gov (Gregory J. Ward) Date: Mon, 3 Apr 1995 22:30:16 +0100 (MESZ) Hi Greg, thanks for help - > This is confusing and you will be disappointed to hear that the problem is thought so - :-) How is rtraced used by mkillum ? I guessed that mkillum samples the entry side by shooting rays with rtrace in random directions and multiplies them with the BRTF and cosine-whatnot. This would imply that it doesn't matter wether its a light or glow source. Which in fact does matter and stirs the questions what the hell happens with rtrace in mkillum ? > Others who have sampled arbitrary functions have used a uniform sampling over others with Radiance or other light programs ? What do people when they want to calculate daylight factors with BRTF materials ? Setting the sky full of small light sources to average would be one way, is it the only one ? Many thanks Peter From greg Mon Apr 3 13:46:05 1995 Return-Path: Date: Mon, 3 Apr 95 13:45:44 PDT From: greg (Gregory J. Ward) To: apian@ise.fhg.de Subject: Re: brtf files and glow light sources Mkillum does use rtrace to sample the illum surfaces, but does not know about BRTF's or any of that. Rtrace does everything, returning the radiance leaving the illum surface (or virtual surface) by the same calculations as rpict uses. Light vs. glow makes the same difference therefore to mkillum as it does to rpict. Mkillum does NOT compute distributions directly from light sources, since that would constitute overcounting if you then applied this as an illum in rpict. It uses the -dv- option to rpict to make light sources appear as blackened areas. I don't know how others calculate illumination through BTDF windows and such. My comment about weighting uniform samples according to the scattering function was related to papers other researchers have published, not even working programs. -Greg Date: Wed, 05 Apr 1995 09:18:08 -0700 (MST) From: "vivek@asu.edu" Subject: Modelling glass in Radiance To: Greg Ward Hi Greg, I have a question regarding the way glass is modelled in Radiance. I am trying to model an Azurlite glass with a Transmittance (visible), as per the manufacturer catalogue = 0.63. Should all three values (RGB) be set equal to 0.63 ? 
OR - should the value of the B component be set at a higher percentage to account for the bluish tint of the actual glass color? Also, is it a good enough approximation to assume that the ultraviolet transmittance of the glass is close to the Blue component and the infrared transmission close to the Red component of the glass transmittance (since these are the values usually provided in the manufacturer's product catalogue)? Any suggestions / comments would be greatly appreciated. Thanks -Vivek Date: Wed, 5 Apr 95 10:46:04 PDT From: greg (Gregory J. Ward) To: MITTAL@ASU.EDU Subject: Re: Modelling glass in Radiance Hi Vivek, The standard glass material in Radiance takes the transmission at normal incidence, not the transmittance. The difference is that transmission doesn't account for external or internal reflections at the interfaces, thus its range of possible values is exactly 0-1, not 0-something depending on the index of refraction. I endeavored to make all the ranges of parameters in Radiance such that the physically valid range was evident. This led to some rather bizarre specifications, to wit the "trans" parameter settings. Borrowing from the reference manual, you can compute the transmission from the transmittance for standard glass (n = 1.52) with the formula:

	tn = (sqrt(.8402528435+.0072522239*Tn*Tn)-.9166530661)/.0036261119/Tn

For your transmittance (Tn) value of 0.63, that comes to a transmission (tn) of .687. If you have different transmittance values for each of red, green and blue, you should run this computation once for each component. All of our window experts are out of town this week, and I'm not personally familiar with the Azurlite glazing you speak of, nor do I have any advice regarding how to specify red and blue components from IR and UV values. This seems like a reasonable thing to do, but glass transparency often changes dramatically outside the visible range. Also, I'm not sure if Azurlite is a coated glazing, i.e. has different reflectance properties from one side as opposed to the other. If so, then you might be better off using the BRTDfunc type and the specification in the "glazing.cal" file in the standard Radiance library to model its properties. Good luck! -Greg
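[A quick follow-up on the conversion formula above: if there are several components or glazings to convert, rcalc can grind through it for you. Something along these lines should work -- the constants are the ones from the formula, only the rcalc wrapping is new:

% echo .63 | rcalc -e 'Tn=$1' \
	-e '$1=(sqrt(.8402528435+.0072522239*Tn*Tn)-.9166530661)/.0036261119/Tn'

This prints the corresponding transmission (about .687 for the example value); feed it one transmittance per line to convert a whole list.]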
Date: Fri, 14 Apr 95 12:52:41 -0600 From: Tarn Burton To: GJWard@lbl.gov Subject: BRTDfunc How do I use BRTDfunc to emulate metal or plastic? I want to switch between the two based on object color. Tarn Date: Fri, 14 Apr 95 18:20:31 PDT From: greg (Gregory J. Ward) To: user1417@vis.colostate.edu Subject: Re: BRTDfunc Hi Tarn, You cannot exactly emulate the built-in Radiance metal or plastic types with BRTDfunc, unless the roughness parameter is always zero. This is because the BRTDfunc type does not sample the rough specular component the way metal and plastic do. What is your reason for wanting to do this -- could you explain better the effect or process you wish to simulate? -Greg From: user1417@VIS.ColoState.EDU (CVL User Tarn Burton) Subject: Re: BRTDfunc To: greg@hobbes.lbl.gov (Gregory J. Ward) Date: Fri, 14 Apr 1995 21:28:38 -0600 (MDT) I want to simulate an object with an inlaid metal pattern. If the point of intersection is gold colored then use the metal material, otherwise use plastic. Mixfunc occurred to me, or mixdata, but they cannot access an existing color. Any suggestion would be great. Tarn Date: Sat, 15 Apr 95 08:22:40 PDT From: greg (Gregory J. Ward) To: user1417@VIS.ColoState.EDU Subject: Re: BRTDfunc You should be able to feed your pattern to mixdata or mixfunc as well as the materials it refers to. Can you be more specific still -- tell me what input you are using for the inlays. -G From: user1417@VIS.ColoState.EDU (CVL User Tarn Burton) Subject: Mixed materials To: greg@hobbes.lbl.gov (Gregory J. Ward) Date: Sat, 15 Apr 1995 12:56:14 -0600 (MDT) I'm using colorpict to map an image onto a cylinder. I would like to avoid creating another image to use as a mask, since the one that I am using is sort of hefty (20 megs). My initial idea was to use mixfunc to look at the color (CrP,CgP,CbP) and decide which material to use based on the color at the point of intersection, since I want to make all the bronze colored parts use the bronze material. But mixfunc cannot be based on a material if the modifiers that it is mixing are materials. I would use mixdata but I don't know how to convert an image to the data file format. That is why I thought of using BRTDfunc, since it can access the CxP variables. I'm not using any roughness values right now and would be willing to give up this feature if BRTD is the only way to go. Sorry about not explaining very well. Tarn Date: Mon, 17 Apr 95 13:23:35 PDT From: greg (Gregory J. Ward) To: user1417@VIS.ColoState.EDU Subject: Re: Mixed materials Hi Tarn, OK, at last I feel like I understand your predicament. The only real difference between metal and plastic in Radiance is the specular color, which you can influence with the BRTDfunc type. However, another usual difference between metal and plastic is that metal has a higher specular component relative to diffuse, and the BRTDfunc type does not let you alter the diffuse coefficient, although you can change the specular one. To approximate the difference using the BRTDfunc type, your file will look something like this:

void colorpict mypicture
7 noop noop noop mypicture.pic mypicture.cal u v
0
0

mypicture BRTDfunc mypictmat
10 myred mygreen myblue
   0 0 0
   0 0 0
   mypicture.cal
0
9 .5 .5 .5
  .5 .5 .5
   0  0  0

Here, I have chosen the value of (.5) for the average diffuse reflectance, but you can choose whatever you like. (The value will get modified by the colorpict pattern.) In the file "mypicture.cal" will be the definitions for u, v, myred, mygreen and myblue. U and v are the coordinate mappings for the image, which I presume you have worked out already. Myred, mygreen and myblue will look something like this:

myred = if(ismetal, .4*CrP, .05);
mygreen = if(ismetal, .4*CgP, .05);
myblue = if(ismetal, .4*CbP, .05);
ismetal = {expression greater than zero if pattern is bronze};

The problem here is that the degree of specularity doesn't change properly. So, using the mixdata type is a better solution, and you can reduce your original picture down in size and convert to the data type using pfilt and pvalue (and an editor). For example, you might use:

% pfilt -x /4 -y /4 mypicture.pic \
	| pvalue -H -h -d \
	| rcalc -e '$1=.6*$1+.1*$2+.05*$3' > mypict.dat
% vi mypict.dat

The rcalc expression is supposed to compute the "bronzeness" of the pixels, though I don't know if I picked the best coefficients to do this. Anyway, you must edit the ASCII file "mypict.dat" so that the first few lines read:

2
max(yres/xres,1) 0 yres/4
0 max(xres/yres,1) xres/4

Where xres and yres are the original X and Y dimensions of your image. This makes the (u,v) coordinates of the mixdata exactly match the (u,v) coordinates of your colorpict pattern.
(Note that I want you to work out these expressions and write in the numbers, not the expressions themselves!) You can then apply this in a mixdata primitive like so: void colorpict mypicture 7 noop noop noop mypicture.pic mypicture.cal u v 0 0 mypicture metal bronze_part 0 0 5 .7 .7 .7 .8 0 mypicture plastic wood_part 0 0 5 .7 .7 .7 .04 0 void mixdata my_material 5 bronze_part wood_part coef mypict.dat mypicture.cal u v 0 0 The function "coef" should be defined in mypicture.cal to compute the coefficient (between 0 and 1) from the data value in your "mypict.dat" file. I don't know how you want to do this, but you can experiment with different mappings. I hope this long-winded answer is of some help. -Greg From: "CVL User Tarn Burton" Date: Tue, 18 Apr 1995 22:15:35 -0600 To: greg@hobbes.lbl.gov Subject: mixfunc xform Thanks for the help on mixfunc. I've got the mapping worked out, just the nit picky details now. Aside from this, something else has come up. If I use mixfunc in a rad file that is xformed into a another file multiple times all the previous versions of the object will use the most recent xform. This seems to apply to mixdata only, and other materials, such as colorpict seem to work. I solved this problem for the time being by using instance instead. Unless I am doing something wrong (which I don't think so since everything else xforms okay) than this might be a bug. Thanks again for the help, it wasn't too verbose. Most people hardly even respond so it is refreshing to get a complete answer to a question. Tarn Date: Wed, 19 Apr 95 16:35:47 PDT From: greg (Gregory J. Ward) To: user1417@VIS.ColoState.EDU Subject: Re: mixfunc xform Hi Tarn, I actually know about this "bug" but there's not much I can do about it. The same problem happens with antimatter and it's caused not by the wrong transform being applied but by the wrong material being looked up. You see, unlike normal Radiance modifiers, the material or modifier names given as string arguments are looked up at the time of their application, i.e. when a pixel is being rendered, rather than as the file is being read in. Thus, the final definition is the one that always gets used, whether or not it preceded the referring primitive. A simple example is given below: void plastic plas1 0 0 5 .5 .3 .7 .05 .02 void metal met1 0 0 5 .3 .4 .6 .9 .02 void mixfunc mix1 4 plas1 met1 Dz . 0 0 void plastic plas1 0 0 5 .2 .1 .3 .0 .0 mix1 polygon f1 0 0 9 0 0 0 0 1 0 1 0 0 Which definition of "plas1" will apply to the polygon "f1" modified by "mix1"? The answer is, the last one. If "plas1" had modified a primitive instead of being referred to in its string arguments, then the first one would have applied. However, since the modifiers named in the string arguments are dereferenced during rendering and not before, the final definition of "plas1" is the one that holds. I agree that this is a shortcoming, but one that cannot be easily overcome in the current implementation. It's best just to be aware of it, and name your modifiers uniquely to avoid the problem. (This is mentioned in the reference manual under mixfunc.) Putting things in instances is as you say another way to solve the problem. Clever of you to think of that -- I didn't! -Greg ======================================================================= GEOMETRIC MODELERS From: Jeremy Subject: modellor To: greg@hobbes.lbl.gov Date: Wed, 22 Mar 95 18:56:07 WST Hi there Greg, Sorry to have to annoy you personally, but the situation here is getting quite desperate!. 
Firstly, my name is Jeremy and I'm a 3rd year Fine Arts Student here at CURTIN Uni in Perth Australia. I am no elect. engineer and have to muddle my way through a lot of things, but.. We have RADIANCE 2.4 installed here and it runs well. We're running it on 2 HP 9000's. One with 16 and the other 32 RAM. We have no trouble with the software at all, it's very nice :). But my problem is I have no modellor complex enough to do what I'd like, with an easy to use interface.(or at least not completely text based.) We have Mac's here. And a baby pc, which I don't touch. I have looked at rendermanCad for Povray and then thought I'd convert objects into Radiance, but the prog is too simple. I have shown Simon (crone) the output from Infin-D on the Mac, which we have. .. And it's nothing like a true .dxf file. Complete garble. We can't seem to find a decent modellor for the HP's, which we would prefer, as they steamroll a Mac, even on a cloudy day. We don't have any CAD progs for the workstations. I haven't seen a great deal of organic things done with RADIANCE, in fact I haven't seen any. I know it's an architectural modellor, but it's the software I chose after looking at the range available. I know I have the ideas..but at the moment, not the means. As a side note, I also use WAVEFRONT Personal Visualizer 2.11. An absolute pig of a program, but it has a great scene creation area. Unfortunately, I can't seem to convert it's output into Radiance either. It produces .geo files. So, do you have any ideas at all? Have you heard of, or used Visualizer? I'm starting to see .rad files in my sleep. :) Any help would be appreciated very much. Jeremy Burton - Graphic Design, I.M.A.G.E. Technology. Dept. Electronic Engineering, CURTIN University of Technology. Perth Western Australia, Australia. jeremy@picasso.ece.curtin.edu.au Date: Wed, 22 Mar 95 18:14:37 PST From: greg (Gregory J. Ward) To: jeremy@picasso.ece.curtin.edu.au Subject: Re: modellor Hi Jeremy, For the Mac, there is a free program called Vision3D which you can pick up from our ftp site (hobbes.lbl.gov) in the /pub/mac directory. It is very good, though I'm not sure how well you can use it to create free-form shapes. That's a hard problem, and the one time I did it myself, I used a NURBS editor written by a friend as a demo for SGI's. I don't know if I could get a working version of it for you, but I'll try if you ask me nicely. Can you write out .OBJ files from Wavefront? I recently wrote an obj2rad converter. It's not very sophisticated, and doesn't know how to handle parametric surfaces, so you will have to tesselate them beforehand if you can. There is also a translator for StrataStudio in the /pub/translators directory on hobbes. Let me know if any of this is any use to you. I appreciate your efforts, and wish you luck. It isn't easy, I know. If there is a free or shareware modeller you really like, you might want to bug the author to add Radiance support. It really is pretty easy to do. -Greg ======================================================================= LENSES From: Cameron Meadors Subject: effects of lenses To: GJWard@lbl.gov Date: Fri, 24 Mar 1995 13:16:30 -0500 (EST) Greg Ward, I have been using Radiance 2.4 compiled on Linux long enough to get into some pretty intense models. Question: How realistically can lenses be modeled? I have read the the discussions in comp.graphics.raytracing, but I haven't found a definite answer. 
Looking into a lens is fine, but what about the light refracted through a lens and projected on another surface. I believe I read one of your posts describing Radiance as using raytracing and radiosity techniques. Could you explain the limits of lenses and how I can demonstrate the effects? Thank you in advance. -- Cameron Meadors Mechanical Engineering '97 cameron@syzygy.res.wpi.edu Worcester Polytechnic Institute, MA Date: Fri, 24 Mar 95 10:26:12 PST From: greg (Gregory J. Ward) To: cameron@syzygy.res.wpi.edu Subject: Re: effects of lenses Hi Cameron, As is true with most light-backwards ray-tracing algorithms, Radiance is not well-suited to modeling light as passed through lenses onto other surfaces. To do this effectively, you really need a forward or bidirectional ray-tracing method. Unfortunately, I know of no ray tracer distributed over the internet that includes this capability, as it is of limited use in most environments. Nevertheless, you may be able to see what you're after in Radiance, by decreasing the -aa value and increasing -ad and -as (with -ab 1 or higher), and modeling the light source(s) with the glow type with a radius of zero. This relies on finding the light sources with the "indirect" calculation, which won't work very well for small light sources. Therefore, your sources should be reasonably large or you will have to use preposterous settings for the -ad parameter. If you just stick with the default Radiance direct calculation, and your light source is an intermediate size, you will get a shadow effect due to lens refraction that, although not strictly correct, may be a reasonable approximation of the illuminated area. This is due to the fact that shadow rays are refracted through a dielectric medium in Radiance as a cheap, halfway solution. Hope this helps. -Greg Date: Fri, 24 Mar 95 13:34:02 -0800 From: greg@pink.lbl.gov (Gregory J. Ward) To: greg@hobbes, cameron@syzygy.res.wpi.edu Subject: Re: POV: Lenses? - comp.graphics.raytracing #12136 Hi again. Coincidentally, I came across this lucid description of the problem from Jason Black in comp.graphics.raytracing posted today. In article <3ksqfp$gc7@nntp3.u.washington.edu>, cloister@u.washington.edu (cloister bell) writes: damianf@wpi.edu (Damian Frank) writes: >Sorry if this is a dead topic around here, but it wasn't in the FAQ. Is it >possible to model lenses using PoV? I've tried, by shining a point light >through the intersection of two spheres (to produce a convex lens) but it >doesn't seem to alter the light at all. This might be because I know >nothing about lenses, but I expected _something_ to happen. Does anyone >know about this? improper treatment of lenses is one of the flaws of backward ray tracing in general. if you trace rays from the eye out into the world, they will pass through lenses in the proper way. So as long as you're _looking_ through the lens it'll be fine. however, if you're trying to use the lens to focus light from a light source, backward ray tracing is doomed to fail. what's going on in the real world is that the lens collects light over its surface and concentrates (or otherwise redistributes) it onto other surfaces. you can try and model this two ways. one, you can trace rays forwards from the light sources to see where they land and then gradually build up an illumination map for the scene that way. 
The other method is to try and solve analytically how the lens will redistribute light into the scene and then treat the lens more or less as a virtual light source of its own (albeit with some special properties than normal lights). forward ray tracing, of course, requires you to cast a whole lot more rays than you need to actually illuminate the part of the scene that you're looking at, but on the other hand, you only have to do it once. In those respects it's a lot like radiosity, except that forward ray tracing is stochastic whereas radiosity is analytic which makes radiosity less computationally expensive than forward ray tracing. the analytic solution for general lenses is just plain hard on a mathematical level. however, progress is being made in this area, so i wouldn't be surprised to see this sort of feature show up in packages like radiance and maybe even pov in the next couple of years. -- +-------------------------------------------------+---------------------------+ |tactical nuclear sdi stealth nsafood signature. | cloister@u.washington.edu | +-------------------------------------------------+---------------------------+ From: Cameron Meadors Subject: Re: POV: Lenses? - comp.graphics.raytracing #12136 To: greg@pink.lbl.gov (Gregory J. Ward) Date: Fri, 24 Mar 1995 19:48:41 -0500 (EST) > > Hi again. Coincidentally, I came across this lucid description of > the problem from Jason Black in comp.graphics.raytracing posted today. > > In article <3ksqfp$gc7@nntp3.u.washington.edu>, cloister@u.washington.edu (cloister bell) writes: > |> damianf@wpi.edu (Damian Frank) writes: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ This is humorous. This is my roommate. This post is what sparked this whole discussion :) Thanks anyway. -- Cameron Meadors Mechanical Engineering '97 cameron@syzygy.res.wpi.edu Worcester Polytechnic Institute, MA ======================================================================= HERMITE CUBIC FUNCTIONS From: Cameron Meadors Subject: Hermite eq's To: GJWard@lbl.gov Date: Fri, 31 Mar 1995 14:32:52 -0500 (EST) Greg, I have been playing with complex surfaces in Radiance and I noticed that you use the hermite function frequently. I have not been able to find anything on hermite polynomials except for a very generic form. Is there something unique about them that makes them nice for parametric definitions of surfaces? Can you give me some titles of books that explain them? Thanks in advance. -- Cameron Meadors Mechanical Engineering '97 cameron@syzygy.res.wpi.edu Worcester Polytechnic Institute, MA Date: Fri, 31 Mar 95 13:12:00 PST From: greg (Gregory J. Ward) To: cameron@syzygy.res.wpi.edu Subject: Re: Hermite eq's Hi Cameron, I like Hermite cubics myself simply because I understand them. They are defined by their endpoints and slopes (or "velocities") at the endpoints. It's relatively easy to build things using Hermite cubics and nothing else. Any basic computer graphics text, like Foley, van Dam, Feiner and Hughs (Addison Wesley), will explain this and other cubic forms. The basic explanation I would give is that a Hermite curve looks like so: _ r0 _ r1 /| _/| / _/ / / / _----__ _-O p1 / _- --___- / _/ /_/ // O p0 (Excuse my bad ASCII drawing.) The idea is that r0 determines the slope at p0 and r1 determines the slope at p1. The length of the vectors r0 and r1 also determine how much the curve is "pulled" in the given direction. A zero length for r1 would mean that the curve just meanders over to p1. A long length (i.e. 
a large velocity) means that the curve is really zinging at p1, so it may cause it to take a larger bend getting there to smooth out the turn. I wish I could show you this in animation, but you'll just have to play with it to see what I mean. -Greg

=======================================================================
AMBIENT BOUNCES

From: manuel@ise.fhg.de
Subject: Ambient bounces
To: greg@hobbes.lbl.gov
Date: Wed, 19 Apr 1995 16:35:09 +0100 (MESZ)

Hi Greg, some questions concerning "ambient bounces": We modelled a simple office room with a window facing WSW. Walls are plastic .7 .7 .7 . Sunny sky. At a point in the center of the room, we started rtrace -I, with several configurations in the parameters:

def = default rtrace parameter settings
OPT = -ad 256 -as 128 -aa .15 -ar 108 (as in dayfact)

Now, varying the ab parameter, we get (in the R channel):

        def     OPT
ab 0    0       0
ab 1    0       1.963
ab 2    4.703   5.804
ab 3    7.305   8.361
ab 4    8.708   9.885
ab 5    10.12   10.45
ab 6    10.74   11.09
ab 7    10.08   11.05
ab 8    same as ab 7    -
ab 9    same as ab 7    -

Could you tell me, please, why:

 A) ab 6 > ab 7
 B) ab 7 = ab 8 = ab 9 (at least in the configuration def)
 C) Are A) and B) physics or RADIANCE features?
 D) Are these differences (eg, ab_4 nearly 2*ab_2) reasonable?
    As I understand the concept of ambient bounces, with every additional bounce from the walls we get only 70% of the reflected light (as the walls are plastic .7 .7 .7). So, after 4 reflections you get only a fraction of .7^4 = .24 of the initially incoming light. Does an additional ambient bounce collect so much light that the weakening of the light intensity (one additional reflection) is more than counterbalanced by the additional amount of light collected?
 E) (Do you understand question D) ????)

Thank you very much in advance! Manuel

-------------------------------------------------------------------
Manuel Goller
Fraunhofer Institute for Solar Energy Systems
Simulation Group, A608
Oltmannsstr. 5
D - 79100 Freiburg
GERMANY
Phone: ++49 - (0) 761 - 4588 - 296
Fax: ++49 - (0) 761 - 4588 - 132
Email: manuel@ise.fhg.de
-------------------------------------------------------------------

Date: Wed, 19 Apr 95 16:50:33 PDT
From: greg (Gregory J. Ward)
To: manuel@ise.fhg.de
Subject: Re: Ambient bounces

Hi Manuel, So many questions! I'll try to answer them, but I don't know if I can to your satisfaction. Here goes:

> Could you tell me, please, why:
>
>  A) ab 6 > ab 7
>
I would attribute this to random errors. Different rays are traced on different runs, and they are distributed using Monte Carlo, so some noise in the calculation is normal. The reason the values were monotonically increasing before was that you were accumulating indirect, which you wouldn't be doing if you had set your -av value correctly. (I.e. not left it as 0.) Towards the end, the delta changes are getting small, and are overwhelmed by noise.

>  B) ab 7 = ab 8 = ab 9 (at least in the configuration def)
>
Radiance decreases the number of secondary rays at higher reflection levels by a factor of two. After 7 bounces, the number of rays sent is the -ad parameter over 2^7, or two in this case. (This gets demoted to 0 because two is not enough rays to even sample anything.)

>  C) Are A) and B) physics or RADIANCE features?
>
They are Radiance "features".

>  D) Are these differences (eg, ab_4 nearly 2*ab_2) reasonable?
>     As I understand the concept of ambient bounces, with every
>     additional bounce from the walls we get only 70% of the
>     reflected light (as the walls are plastic .7 .7 .7).
>     So, after 4 reflections you get only a fraction of .7^4 = .24
>     of the initially incoming light.
>
>     Does an additional ambient bounce collect so much light that
>     the weakening of the light intensity (one additional reflection)
>     is more than counterbalanced by the additional amount of light
>     collected?
>
I'm a bit puzzled myself why you get a value of 0 for the default parameters and 1 ambient bounce. This would seem to indicate that 128 samples is not enough to even find the window, which means that the subsequent default calculations are not even worth looking at. Your crude zonal approximation does not account for the distribution of light in the space. You have to think about light coming from the sky, as well as direct light landing on the floor, bouncing, then bouncing again off the ceiling before finally reaching your illuminance point. Then, I can sort of see how it might make sense.

>  E) (Do you understand question D) ????)
>
I don't know, did I answer it? -Greg

=======================================================================
BUMP MAPS

From: "CVL User Tarn Burton"
Date: Wed, 19 Apr 1995 21:01:31 -0600
To: greg@hobbes.lbl.gov
Subject: Bump Map

Is there a program that I can use to convert a data file (or picture) that represents a displacement map to the three bump map files that I need for texdata? The displacement map is just an array of heights in the Z vector, with X and Y being represented by the position in the array. Thanks, Tarn Sorry to bug you so much.

Date: Wed, 19 Apr 95 20:39:44 PDT
From: greg (Gregory J. Ward)
To: user1417@VIS.ColoState.EDU
Subject: Re: Bump Map

Hi Tarn, Unfortunately, an array of heights does not a Radiance texture make. In fact, there is no natural way to relate surface heights to surface normals, and it's a mystery to me how other rendering systems do it. I suppose they use some sort of fitting function and take its slope, but the choice of function wholly determines the slope, and the choice is completely arbitrary! I'm sorry that I can't answer your question in this case, but I simply don't know a good answer. Your guess is as good as mine. -Greg

From: "CVL User Tarn Burton"
Date: Fri, 21 Apr 1995 11:19:27 -0600
To: greg@hobbes.lbl.gov
Subject: Bump map

Here is a program to do the bump map conversion. It's pretty primitive and I will probably work on it some more in the future, but maybe you can get something out of it now. It takes an array of heights from stdin and four args on the command line, the first being the array width and the others being the names of the X, Y, and Z data files to write to. As for how it does the actual calculations, it really is fairly simple. For each point it fits a spline function onto the points in the vertical direction and the horizontal direction. The derivative of these functions represents dY and dZ, since the array is oriented so the heights point in the positive X direction. The cross product of these two vectors results in the surface normal. The program doesn't do any scaling or other fancy things; maybe I'll add this later, right now I just wanted a quick fix.
Tarn
---------------------------------
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

FILE *Xfile,*Yfile,*Zfile;

/* write one unit-normal sample given the tangent slopes dY and dZ
   (normal is proportional to (1, -dY, -dZ)) */
void Cross(double dY,double dZ)
{
	double divisor=sqrt(dY*dY+dZ*dZ+1.0);
	fprintf(Xfile,"%g\n",1.0/divisor);
	fprintf(Yfile,"%g\n",-dY/divisor);
	fprintf(Zfile,"%g\n",-dZ/divisor);
}

static double coeff[4][4]=
	{{-4.0,9.5,-8.0,2.5},
	 {-0.5,0.0,0.5,0.0},
	 {0.0,-0.5,0.0,0.5},
	 {-2.5,8.0,-9.5,4.0}};

/* slope estimate from four consecutive samples, using spline-fit weights */
double Spline(int i,double x0,double x1,double x2,double x3)
{
	return (coeff[i][0]*x0+coeff[i][1]*x1+coeff[i][2]*x2+coeff[i][3]*x3);
}

int Width;
double *X[4];

/* compute and write the normals for one row of the height array */
void CalcRow(int i)
{
	int j;
	Cross(Spline(i,X[0][0],X[1][0],X[2][0],X[3][0]),Spline(0,X[i][0],X[i][1],X[i][2],X[i][3]));
	for (j=3;j

and say you want on it. -Greg

From: hwj@henrik.igk.dth.dk
Subject: Re: A few academic ideas...
To: greg@hobbes.lbl.gov (Gregory J. Ward)
Date: Tue, 22 Nov 1994 01:23:17 +0100 (GMT+0100)

Hi Greg, I can see the problem of collecting material samples. I would however also think that measuring and fitting these materials would take a lot of time. I could see if I could get my hands on some materials like: bricks, pieces of wall-paper, different types/colours of paint, plants? (grass, leaves, bark...) Of course a number of these would have to be accompanied by some texture mapping. Plants would be quite difficult to send - I guess the reflectance properties would change during air-mail ;-) I wouldn't mind collecting a material database. I think the work of cataloging the materials could simply be done (initially) using a text file containing names of materials and the corresponding reflectance model and measured data - just like the tables you presented in your paper at Siggraph 92. It surprises me that no one else has announced implementations of the irradiance gradient method. I am very pleased with it. I started initially with plain recursive Monte Carlo sampling, which took years (almost) to complete. Then I found your article on the store and reuse technique in comp. graph. 1988, which I implemented, and it was a large improvement (it did however eat up a bit of memory). A few months later I found your article on the irradiance gradient method, which was again a large improvement and gave very good results. - Henrik

From: hwj@henrik.igk.dth.dk
Subject: Re: A few academic ideas...
To: greg@hobbes.lbl.gov (Gregory J. Ward)
Date: Tue, 22 Nov 1994 22:32:03 +0100 (GMT+0100)

Hi Greg, It could indeed be very interesting to arrange a sample drive. I can think of a number of materials that could be quite interesting to measure: bricks, different types/colours of wall-paper, different types/colours of paint, plants (I don't know whether they fit into the model). I wouldn't mind cataloging these materials. I think it can be done (initially) simply by adding information to a text file regarding the material name and the measured reflectance data. It surprises me that no one else has announced implementations of the irradiance gradient method. It has been very useful to me. I started initially with plain Monte Carlo sampling of the indirect light (it took years to complete). Then I found your article in comp. graph. 88 concerning storing and reusing the indirect light, which I implemented with very good results. Later I found your 92 article with the gradient technique and it gave even better results. I think that, for instance, Chen in comp. graph. 91 could benefit from the irradiance gradients method instead of using path tracing in the visualization step.
I have read in your articles that the results in Radiance are verified from time to time (and they improve). How is this verification done. Is it via a simple model that can be handled with exact calculations or is it via a brute force Monte Carlo technique? - Henrik Date: Tue, 22 Nov 94 11:14:14 PST From: greg (Gregory J. Ward) To: hwj@henrik.igk.dth.dk Subject: Re: A few academic ideas... Hi Henrik, Sorry not to respond to your last message, but there are so many problems with creating a material database that I haven't solved yet, that I don't even want to think about it right now. (Material textures and colors are BIG problems.) Radiance has been compared to measurements of scale models, other lighting simulations and measurements and qualitative comparisons of full-sized environments. Most of the good validation work has been done outside of LBL. -Greg From: hwj@henrik.igk.dth.dk Subject: Re: A few academic ideas... To: greg@hobbes.lbl.gov (Gregory J. Ward) Date: Tue, 22 Nov 1994 20:24:10 +0100 (GMT+0100) Hi Greg, > Sorry not to respond to your last message, but there are so many problems Sorry for resending it. But elm crashed just after sending the mail and I was wondering whether it was sent at all. > with creating a material database that I haven't solved yet, that I don't > even want to think about it right now. (Material textures and colors are > BIG problems.) That is true, but as a beginning textures could be ignored. Why are colours problematic? > > Radiance has been compared to measurements of scale models, other lighting > simulations and measurements and qualitative comparisons of full-sized > environments. Most of the good validation work has been done outside of LBL. Are these results somehow available? I would be very interested in validating my own little rendering-program.... - Henrik Date: Tue, 22 Nov 94 13:54:25 PST From: greg (Gregory J. Ward) To: hwj@henrik.igk.dth.dk Subject: Re: A few academic ideas... Hi Henrik, The problem with colors and textures is that the device I currently have can neither measure nor ignore them. They interfere with the measurements, especially surfaces with visible textures or patterns. There are a couple of reports with Radiance validation data in them. One is by John Mardaljevic: Mardaljevic, John, K.J. Lomas, D.G. Henderson, ``Advanced Daylighting Design for Complex Spaces'' Proceedings of CLIMA 2000, 1-3 November 1993, London UK. Two others come from LBL: Grynberg, Anat, Validation of Radiance, LBID 1575, LBL Technical Information Department, Lawrence Berkeley Laboratory, Berkeley, California, July 1989. Papamichael, Kostantinos, Liliana Beltran, ``Simulating the Daylight Performance of Fenestration Systems and Spaces of Arbitrary Complexity: The IDC Method'' "Proceedings of Building Simulation `93", Adelaide, Australia, August 16-18, 1993. Don't ask me to send Anat's to you. It's rather long, and the color images don't copy well. These papers are not the best resource if what you want to do is validate your own software, however. You would be better off using the simulations that lie in the /pub/tests directory on hobbes.lbl.gov. That's what they're there for. -Greg Dear Radiance Users, Here is a somewhat overdue installment of the Radiance Digest. (Well, technically it's not overdue since there's no schedule, but it's kind of long so I hope I don't choke your mailers or use up what little disk space you had left.) 
In this issue, the following topics are addressed: ILLUMINATION MAPS - Applying Radiance results to polygons REFLECTION MODEL - Understanding Radiance's reflection model DESIGN WORKSHOP - Using DesignWorkshop as input to Radiance MODELING STARS - Modeling a starry sky SAVING UNFILTERED PICTURE WITH RAD - Tricks and changes DEBUGGING FUNCTION FILES - How to debug .cal files ANIMATION FEATURES (LACKING) - What Radiance can and cannot do ANTIMATTER MODIFIER LOOPS - "possible modifier loop" error MODELING SEMITRANSPARENT LIGHT SHELF - Using prism and mirror types MATERIAL MIXTURES - Modeling inhomogeneous materials RADIANCE VS. POV-RAY - Detailed comparison between these packages PHOTOMETRIC UNITS AND COLOR - Converting to and from lighting units As usual, if you got this mailing against your wishes, send a message saying something like "I wish to unsubscribe from the Radiance digest list" to: radiance-request@hobbes.lbl.gov Please DO NOT REPLY to this message! If you want to write to me directly, send e-mail to and I will respond. For those of you who are new to the mailing list, I remind you that we have now a Web site with lots of goodies including indexed back issues at: http://radsite.lbl.gov/radiance/HOME.html We also have an informal discussion group going at . If you wish to subscribe to this group, please once again mail to with a short note saying that you "wish to subscribe to the Radiance discussion list." Due this BEFORE sending any e-mail to the list if you want to see the replies. All the best, -Greg ========================================================================== ILLUMINATION MAPS Date: Tue, 2 May 1995 12:26:00 -0400 From: seguin@vr1.engin.umich.edu (Ralph Seguin) To: GJWard@lbl.gov Subject: Radiance Cc: seguin@vr1.engin.umich.edu Hi. I work at the Virtual Reality Lab here at U Michigan. We are looking methods of producing rendered 3D scenes which we can move around in. Ie. We want to take a 3D scene, and radiosity render it and get colored polygons out (instead of a 2D projected image). Our questions are: Has anybody already done this with Radiance? How much work would be involved in changing it to do this? Is it worthwhile? What would be a good starting point? (I noticed that there was an awful lot of source there ;) Thanks, Ralph Date: Tue, 2 May 95 10:11:21 PDT From: greg (Gregory J. Ward) To: seguin@vr1.engin.umich.edu Subject: Re: Radiance Hi Ralph, I know some folks in Zurich have done this with Radiance, and I think some folks at NIST have as well. It's not too difficult, actually, and doesn't involve modifying any code so long as your VR engine can handle "textured" polygons. Ideally, you would combine an irradiance map computed by Radiance with a texture map containing the surface colors and variations on each polygon. This is most practical for large rectangular areas, but it can be done for smaller polygons as well, or you can use Radiance to compute the vertex irradiances (equivalent to radiosities) directly. In short, there are three methods available to you: 1) Compute surface radiances for each polygon, i.e: a) Use rpict or (perhaps better) rtrace to compute a 2-d image of the visible area of each polygon at high resolution. b) Apply these as texture maps during rendering. 2) Compute surface irradiances for each polygon, i.e: a) Use rpict -i or rtrace -i to compute a 2-d map of the visible area of each polygon at lower resolution. b) Apply these in combination with surface color or texture during rendering. 
3) Compute vertex irradiances with rtrace -I and apply these in a gouraud shading step during rendering. (This probably requires meshing your scene, unlike the above two choices.) Option 2 is the one I would try first. It should be a fast computation, and puts the smallest strain on the rendering hardware if you have a lot of repetitive surface textures. If you don't have any surface textures to speak of, you might use option 1 instead and relax the resolution of your polygon images. The disadvantage of 1 is that it computes radiance, which is not the same as radiosity except for diffuse surfaces. (I.e. the highlights could be confused.) In either case, your shadow definition will depend on how fine you compute these poly images. Hope this helps. -Greg From: "Tim Burr" Date: Sat, 1 Jul 1995 12:29:10 -0700 To: greg@hobbes.lbl.gov Subject: Quick start w/DXF converter Greg, We're looking at purchasing LightScape here but I first wanted to give Radiance a look. I was pleased to see that a DXF converter was available as this is important to us. Can't seem to get it working though. Using the test.dxf file that came with the distribution (Siggraph 94 distribution). If I run dxfcvt -orad dxf -itest.dxf I get two files dxf.rad and dxf.mod. I then create a simple rad control file with just settings for the ZONE, EXPOSURE and the following setting for scene -> scene = dxf.rad dxf.mod. I name this rad file first.rad. When I run "rad first.rad" I get the output: oconv dxf.mod dxf.rad > first.oct oconv: fatal - (dxf.mod): undefined modifier "CIRCLE" rad: error generating octree first.oct removed I know I'm trying to evaluate this on a fasttrack here but if you could give me some help or point me to the right direction that would be most appreciated. I printed out the reference manual and tutorial and that didn't pop out the answer for me. Also looked in the digests and there was no detail info on using the DXF converter. Thanks, Tim -- Timothy Burr | "Essence is that about a thing that makes Coryphaeus Software, Inc. | that thing what it is." burr@cory.coryphaeus.com | - existentialist food for thought - Date: Sun, 2 Jul 95 11:13:38 PDT From: greg (Gregory J. Ward) To: burr@stobart.coryphaeus.com Subject: Re: Quick start w/DXF converter Hi Tim, The DXF converter is for version 10 (I think) and so may not work with the latest version of AutoCAD and other programs. People generally use the torad program from the pub/translators directory on hobbes.lbl.gov and run it as an export function from within AutoCAD. It sounds like you're isolated from the network, so you would have to send me a tape (DAT is fine) to get this program and the latest version of Radiance (2.5) which has a graphical user interface to "rad". Concerning your problem running dxfcvt, it sounds as though you put your modifier file after your scene file, i.e. "scene = dxf.rad dxf.mod" instead of how it needs to be: "scene = dxf.mod dxf.rad". Also, the modifier file created by dxfcvt is merely a template -- you must define the modifier values yourself using a text editor. For more detail, you should work through the Radiance tutorial (tutorial.1 in ray/doc) to gain some experience using the software. If you have the money to buy LightScape and an SGI with 128+ Mbytes of RAM, by all means please do so. It is a commercial product and has the support and work in the interface to make it both powerful and friendly. Radiance is free software and has many of the associated drawbacks -- limited support, documentation and interface. 
Ultimately, I still think you can create better simulations on more complicated models with Radiance than are possible with the current incarnation of LightScape, but I'm not entirely free of bias. Some of their output is stunning. Hope this helps. -Greg From: "Tim Burr" Date: Mon, 3 Jul 1995 10:22:37 -0700 To: greg@hobbes.lbl.gov (Gregory J. Ward) Subject: Re: Quick start w/DXF converter I'm all for going with the free solution. I'm not sure about my colleagues here. I think it will depend mostly on the I/O conversion capabilities of the systems. We would need a decent DXF/OBJ in capability and a DXF/OBJ out (at least some way to get the texture coords out). Coryphaeus is a leading producer of 3D visual simulation modeling tools. We're looking at radiosity packages to compute really night lighting that we can then slap on as texture maps. Tim Date: Mon, 3 Jul 95 14:07:25 PDT From: greg (Gregory J. Ward) To: burr@stobart.coryphaeus.com Subject: Re: Quick start w/DXF converter Hi Tim, I think the I/O capability of Lightscape is superior to Radiance, particularly with regard to export. Although Radiance is capable of generating surface textures, there are no programs customized to this purpose, so getting what you want is going to be difficult. -Greg Date: Mon, 3 Jul 1995 15:42:45 -0700 To: greg@hobbes.lbl.gov (Gregory J. Ward) Subject: Re: Quick start w/DXF converter Could you include me on the unmoderated list or send me a blurb on how to subscribe? Well maybe that's one area that we can contribute to Radiance. When you say that Radiance generates surface textures can you elaborate? How does it partition (or what control does it give you to), create surface texture maps from the lit scene? That is, how are you able to turn the per-vertex color values (that is the lighting info), into indices into a selected texture map within a set of generated texture maps that in total cover the scene? I understand that you're busy so if you're spending too much time on this let me know. Tim Date: Mon, 3 Jul 95 16:08:17 PDT From: greg (Gregory J. Ward) To: burr@stobart.coryphaeus.com Subject: Re: Quick start w/DXF converter Hi Tim, There are no per-vertex color values in Radiance unless you explicitly generate them by calling rtrace -I with the vertex positions and normals as input. The simplest way to generate illumination maps is by running rpict with the -i option and a parallel view that sees only the surface or surfaces you are generating a texture for. A more robust approach is to call rtrace as a subprocess and query the irradiance at points on all the surfaces, constructing a map on a coordinate system you select. If you can only handle polygonal data, you can convert the Radiance scene description into MGF then use the standard MGF parser (also included in 2.5) to extract a polygonal representation. You can then use this to select your computed points, and it shouldn't be too difficult. -Greg From: "Tim Burr" Date: Mon, 3 Jul 1995 16:55:46 -0700 To: greg@hobbes.lbl.gov (Gregory J. Ward) Subject: Re: Quick start w/DXF converter When you get to the polygonal representation do you still have access to the continuous irradiance signal? That is, isn't all you have at that point just the samples of the irradiance signal at the vertices? In what form can you carry the full irradiance info into this polygonal form? No one here has heard of MGF. Could you briefly describe. Tim Date: Mon, 3 Jul 95 18:07:57 PDT From: greg (Gregory J. 
Ward) To: burr@stobart.coryphaeus.com Subject: Re: Quick start w/DXF converter There is no system I know of, Radiance included, that maintains a continuous representation of irradiance. Radiosity systems usually employ a linear interpolation between vertices (Gouraud shading) and Radiance uses a more complicated scheme that is roughly equivalent to a third-degree function to interpolate values. Probably the easiest thing to do is to query values on a uv coordinate system of your own choosing and either interpolate values linearly or leave it to the hardware to deal with. I'm not surprised that you haven't heard of MGF, as it's only been out a few months and the only advertisement has been on comp.graphics. Our WWW site for MGF is http://radsite.lbl.gov/mgf/HOME.html -- it's basically a language for interchange of 3-d geometry and materials. I can put a copy of these web pages on the DAT as well if you can't get to our site. -Greg ========================================================================== REFLECTION MODEL Date: Wed, 3 May 1995 15:50:55 +0100 (GMT+0100) From: Henrik Wann Jensen To: GJWard@lbl.gov Subject: Anisotropic Reflection Hi Greg, Some time ago I implemented your anisotropic reflection model. I did however find that the anisotropic highlights were too bright. So I decided to look into your implementation in the Radiance program. It seems to me that you use 1/cos^2(delta) instead of tan^2(delta) in the formula? Furthermore you add a small fraction omega/(4*PI) if the reflecting surface is flat. I modified my code into using 1/cos^2(delta) and my highlights improved significantly since the argument to exp became more negative. Is 1/cos^2(delta) more correct or have I missed something? Thanks, Henrik Wann Jensen Date: Wed, 3 May 95 10:16:18 PDT From: greg (Gregory J. Ward) To: hwj@hwj.gk.dtu.dk Subject: Re: Anisotropic Reflection Hi Henrik, I'm glad to hear that my paper didn't go completely unnoticed! I can understand your difficulty interpreting my code -- I had trouble just now when I looked at it again myself. What's confusing is that since the paper was written, another fellow (Christophe Schlick) helped me to discover a more efficient vector formulation, so that what's in Radiance doesn't exactly match what's in the paper. The subtle thing you're probably missing in aniso.c is that the half-vector is not normalized, and in fact when you use this unnormalized vector in the way I have here, you do end up computing the tangent^2(delta). Thus, the code exactly matches Equation 5a in the paper with the exception of the omega/(4*PI) factor you discovered. That factor was put in as an approximation to the effect source size has on a highlight. It is not terribly accurate, and it approximates any shape source as a sort of blob, and extends the highlight to compensate. It is applied only to flat surfaces, since curved ones have an unknown effect on the highlight appearance, and at that point I give up. The proper thing to do of course is to send multiple samples to the light source and average them, which happens also in Radiance -- this just reduces noise associated with that process. I hope this has cleared up some of your questions. -Greg Date: Mon, 15 May 1995 19:43:56 +0200 () From: Maurice Bierhuizen To: "Gregory J. Ward" Subject: Re: green-house questions Hello Greg, Could you help me with some questions I have on simulating cloudy sky scenes in Radiance? 
Is it true that the only reflectioncomponent that's being considered in calculations with cloudy skies is the diffuse indirect component? If so: Is it true that all the -d* or -s* parameters in rtrace/rpict are of no importance with cloudy skies? And, are the specularity and roughness parameters for the material types metal and plastic of any importance with cloudy skies? According the reflection formulas for plastic and metal in 'Behaviour of materials in Radiance' the roughness is of no use under these conditions, unless it's zero. That's completely in accordance to what I see with my simulations. But the specularity parameter should have influence on the diffuse reflection. The thing is that I don't see it influence the reflection of objects unless it is maximum (=1), and it then becomes totally black. Another question I have is about the glass materialtype. I created a test scene with a cloudy sky (no direct sun) and a very large glass polygon with its normal pointing up in the sky (Z+ direction). When I measured irradiance (rtrace -I option) in points below and above the glass polygon in the Z+ direction I did not get any different irradiance values. Can you explain this to me, I think irradiance should be lower under the glass polygon, am I wrong in thinking that? Thank you in advance. Greetings, Maurice Bierhuizen. Date: Mon, 15 May 95 10:53:46 PDT From: greg (Gregory J. Ward) To: M.F.A.Bierhuizen@TWI.TUDelft.NL Subject: Re: green-house questions Hi Maurice, In fact, the specular component is computed for any type of environment, including cloudy skies. I don't know why your surface with 100% specular should appear black, unless you are using an old version of Radiance or you have set the -st value to 1. It is true that the sky does not participate in the direct portion of the calculation, so the -d* options have no effect, however the -s* options are still important. You are correct in expecting that your -I computation under the glass should be lower than your -I computation above the glass, unless you have specified a ground source in which case the reflection off the glass might compensate for the loss in the transmitted component. Another thing to be careful of is the setting of your -ar parameter, which could cause the calculation to reuse the above value for below simply because they are too close together for the program to distinguish them. You might try it again with a different setting of -ar, or turn off ambient interpolation altogether by setting -aa 0. Hope this helps. -Greg Date: Thu, 6 Apr 1995 11:33:57 +0200 (MET DST) From: Maus Subject: Anisotropic Materials To: Radiance Mailing List Hello there, I just read the SIGGRAPH '92 article 'Measuring and Modelling Anisotropic Reflection' by Greg Ward. At the end of this article is a table with some numbers for elliptical Gaussian fits for materials I would like to use. In this table for each material are specified: the diffuse reflection (rho_d), the specular reflection (rho_s), the RMS slope in the x direction (alpha_x) and the RMS slope in the y direction (alpha_y). Now my question is, how do I use them in Radiance? If I use the types plastic2 or metal2 I don't know where the diffuse reflection parameter should go. Should I use the BDRF type to write my own BDRF function? If neither of these is possible, does anyone know where to find this kind of accurate figures that can be used in Radiance. I'm particularly interested in aluminium, zinc, white paint, rubber and white PVC. Greetings, Maurice Bierhuizen. 
Date: Thu, 6 Apr 95 09:37:02 PDT
From: greg (Gregory J. Ward)
To: M.F.A.Bierhuizen@TWI.TUDelft.NL
Subject: Re: Anisotropic Materials

Hi Maurice, Applying the numbers in the '92 paper to Radiance is pretty straightforward, since the reflectance models used by plastic, metal, plastic2 and metal2 correspond to the isotropic and anisotropic Gaussian BRDF's presented in the paper. Let's take two examples, one for plastic and the other for metal. Since the paper does not give any spectral (i.e. color) measurements for the materials, we'll assume a uniform (grey) spectrum.

For the plastic example, we'll use the isotropic "glossy grey paper". The parameters given in the article for this are rho_d=.29, rho_s=.083, alpha_x and alpha_y=.082. The Radiance primitive for this material would be:

void plastic glossy_grey_paper
0
0
5 .32 .32 .32 .083 .082

The reason the diffuse color above is not (.29 .29 .29) is because this value gets multiplied by (1-.083) to arrive at the actual rho_d, so we had to divide our rho_d of .29 by (1-.083), which is (approximately) .32.

For the metal example, we'll use the anisotropic "rolled aluminum", whose parameters from the article are rho_d=.1, rho_s=.21, alpha_x=.04, and alpha_y=.09. The Radiance primitive, assuming the material is brushed in the world z-direction (i.e. is rougher in the other directions), would be:

void metal2 rolled_aluminum
4 0 0 1 .
0
6 .31 .31 .31 .68 .04 .09

The orientation vector given in the first line (0 0 1) is the world z-direction, which will correspond to the direction of the first roughness parameter, which is .04 in our example. It is important not to confuse the article's x and y directions with world coordinates, which may be different. If we wanted our surface orientation to change over space, we might have used a function file in place of the '.' above and given variables instead of constants for the orientation vector.

Computing the diffuse and specular parameters for metal is even trickier than for plastic. The formulas below apply for D and S, where D is the (uncolored) diffuse parameter and S is the specular parameter:

	D = rho_d + rho_s
	S = rho_s/(rho_d + rho_s)

These formulae are needed because the specular component is multiplied by the material color for metals to get the actual rho_s. I hope this clarifies rather than muddies the waters.... -Greg

==========================================================================
DESIGN WORKSHOP

[The following is excerpted from somewhere on the net.]

Learning Radiance, by Kevin Matthews (matthews@aaa.uoregon.edu)

One way to smooth out the Radiance learning curve is to build models with DesignWorkshop, a commercial Macintosh-based 3D modeler (with a really nice live 3D interface). DesignWorkshop has a direct export function for Radiance that not only provides a geometry file, with a sophisticated sky function all set up, etc., but also automatically creates a couple of shell scripts sufficient to completely automate your initial rendering of a view of a model. Of course, if you become a Radiance fiend you'll want to do more than the DW translation builds in, but even then it is a major time-saver and dumb-error preventer.

==========================================================================
MODELING STARS

Date: Mon, 19 Jun 1995 11:25:00 +0100
From: jromera@dolmen.tid.es (Juan Romera Arroyo)
To: greg@hobbes.lbl.gov
Subject: Questions
Cc:

Hi Greg, I'm trying to simulate eclipses and I'd like to use Radiance for this purpose. Is there a good way to do this with Radiance?
When I tried it the shadow seemed to be very big on the earth's surface and the penumbra not accurate. The settings for rpict were: -ps 1 -dj 0.5 -pj .9 .... Also I'd like to know what's the best way to simulate the sky at night (plenty of stars) using Radiance. I also tried with many glow sources (about 15,000) but it was quite slow. Maybe an easier way? Thanks in advance. Best Regards, Juan Romera (jromera@dolmen.tid.es)

Date: Tue, 27 Jun 95 10:38:42 PDT
From: greg (Gregory J. Ward)
To: jromera@dolmen.tid.es
Subject: Re: Questions

Hi Juan, Exactly what kind of eclipse simulation are you after? I.e. what effects are you trying to reproduce? I can't answer your question otherwise. I would simulate stars as a pattern on a glow source covering the whole sky. Doing it as many individual glow sources is terribly inefficient, as you discovered. I don't have a pattern for stars offhand, but you can try creating a .cal file something like so:

{ Star pattern (stars.cal) }
FREQ : 1000;		{ affects number of stars }
POWER : 40;		{ affects size of stars }
staron = if(noise3(Dx*FREQ,Dy*FREQ,Dz*FREQ)^POWER - .95, 1, 0);
-------------

Then, apply it to a glow like so:

void brightfunc starpat
2 staron stars.cal
0
0

starpat glow starbright
0
0
4 1e6 1e6 1e6 -1
------------------

You may have to play with the values of FREQ and POWER to get it to look right. -Greg

==========================================================================
SAVING UNFILTERED PICTURE WITH RAD

Date: Tue, 4 Jul 1995 12:47:02 +0300 (EET DST)
From: Yezioro Abraham
To: Greg Ward
Subject: pfilt resolution

Hi Greg. It's been almost a year since my last mail to you. I have a question: Is it possible to avoid, in the pfilt process (using rif files), the reduction of the picture according to the desired QUALITY? I want to save the analysis with the resolution it has before the pfilt process. I've tried writing in the rif file, in the pfilt options:

pfilt= -x /1 -y /1

But when rpict is finished I get this:

pfilt -r 1 -x /1 -y /1 -x /2 -y /2 tst_w2.raw > tst_pic

Checking a very old digest, from '92, I found your answer to Nikolaos Fotis about the smoothness of an image:

> In short, if a smooth image is more important to you than a correct one,
> you can take the original high-resolution image out of rpict, convert it
> to some 24-bit image type (like TIFF or TARGA), and read it into another
> program such as Adobe's Photoshop to perform the anti-aliasing on the
> clipped image. If you don't have Photoshop, then I can show you how to do
> it with pcomb, but it's much slower.

We are taking the images, after the pfilt, to Photoshop, but something in the smoothness seems to be lost. So we want to check the possibility of not filtering the image size. Any help will be appreciated. Thanks, Abraham

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Abraham Yezioro        e-mail: array01@techunix.technion.ac.il
Fax: 972-4-294617      Tel. office: 972-4-294013
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Date: Tue, 4 Jul 95 12:48:13 PDT
From: greg (Gregory J. Ward)
To: array01@techunix.technion.ac.il
Subject: Re: pfilt resolution

Hi Abraham, The best way to avoid losing your original high-resolution rpict output is to create a link (I assume you are on UNIX) to the root_view.raw file. I.e.
before running rad, execute:

% touch root_vw1.raw
% ln root_vw1.raw root_vw1.orig
% touch root_vw2.raw
% ln root_vw2.raw root_vw2.orig
% rm root.oct
% rad root.rif >& root.errs &

Rad will still do all the same actions, but you won't lose your original picture files because you've created hard links to them. (Run "man ln" for details.) -Greg

Date: Wed, 5 Jul 1995 08:54:18 +0300 (EET DST)
From: Yezioro Abraham
To: "Gregory J. Ward"
Subject: Re: pfilt resolution

Thanks Greg. It really works ... but if you don't mind, I think that just as you can override the "defaults" among the rpict options, you should be able to do the same with the pfilt options. In the process of providing the TRAD tool to make Radiance easier for us all to use, it would be convenient to be able to "touch" the pfilt options (regarding the -r and -x -y options). Thanks again, and keep up the good work you are doing, Abraham

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Abraham Yezioro        e-mail: array01@techunix.technion.ac.il
Fax: 972-4-294617      Tel. office: 972-4-294013
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Date: Wed, 5 Jul 95 09:09:35 PDT
From: greg (Gregory J. Ward)
To: array01@techunix.technion.ac.il
Subject: Re: pfilt resolution

Hi Abraham, In fact, you can override all pfilt options EXCEPT -x and -y, since those would change the resulting output resolution and end up contradicting the RESOLUTION= variable setting. I did this intentionally so that would not happen. I would be willing to add a T/F variable called RAWSAVE that would prevent the raw picture files from being removed. Would this satisfy you? -Greg

[P.S. to this message -- the next release of rad will include two new variables, RAWFILE and ZFILE, which allow you to save the original unfiltered picture and distance map.]

==========================================================================
DEBUGGING FUNCTION FILES

Date: Tue, 4 Jul 1995 13:19:46 +0100
From: pcc@dmu.ac.uk
To: greg@hobbes.lbl.gov
Subject: Function files

Hi Greg, I'm trying to write a function file, and I'm having some difficulty. Are there any debugging aids, such as the ability to print out values? Regards, Paul

***************************************************
* Paul Cropper
* ECADAP Centre
* Institute of Energy and Sustainable Development
* De Montfort University
* The Gateway     E-mail pcc@dmu.ac.uk
* Leicester       Tel 0116 2577417
* LE1 9BH         Fax 0116 2577449
***************************************************

Date: Tue, 4 Jul 95 12:52:25 PDT
From: greg (Gregory J. Ward)
To: pcc@dmu.ac.uk
Subject: Re: Function files

Hi Paul, There is a script called "debugcal" that allows you to take the output of ximage (i.e. the 't' command for tracing rays) and see what your cal file is computing. There's no man page for it, which is unfortunate, and I haven't used it much myself, but it might do the trick. Look at the source in ray/src/util/debugcal.csh to see what it does. It employs the rcalc program. -Greg

==========================================================================
ANIMATION FEATURES (LACKING)

Date: Sat, 22 Jul 1995 10:04:36 -0700
From: danj@netcom.com (Dan Janowski)
To: GJWard@lbl.gov
Subject: Rad Q's: Motion Blur, Splines & Cameras

Just got Radiance, it's pretty neat. Thanks. I like what you have done. It feels good to use. A pleasant and direct scene description language. I have a couple of questions about things that seem to be missing and am just curious about your thoughts on them.
I apologize if I am asking for anything that is already possible, I may have not discovered the Radiance way of getting it done. So, here they are: Motion blur (Temporal aliasing) Spline surfaces A real-world camera definition with the camera defined in the scene language Animated transformations for objects defined in the description languange, i.e. a spline definition and being able to tag that to an object's transformation. Dan -- Dan Janowski Triskelion Systems danj@netcom.com New York, NY Date: Mon, 24 Jul 95 14:43:31 PDT From: greg (Gregory J. Ward) To: danj@netcom.com Subject: Re: Rad Q's: Motion Blur, Splines & Cameras Hi Dan, It is possible but not straightforward to do the things you want. To address each item specifically: Motion blur (Temporal aliasing) You can generate multiple images at different time steps and average them together. Not the best solution, but it works. (Pcomb is the program to average images.) Spline surfaces The gensurf program is a general tool for representing functional surfaces as (smoothed) polygons. See ftp://hobbes.lbl.gov/pub/models/podlife.tar.Z for an example of how this is done for NURBS. A real-world camera definition with the camera defined in the scene language Well, I'm not sure why you would want this, but you can always put the view options in the scene file as a comment. Normally, Radiance keeps views separate from scene data, since there is not a one to one correspondence. Animated transformations for objects defined in the description languange, i.e. a spline definition and being able to tag that to an object's transformation. You can use the rcalc program in combination with a format file that looks something like this: !xform -rx ${rx} -ry ${ry} -rz ${rz} -t ${tx} ${ty} ${tz} object.rad You'll have to look up rcalc to figure out what I'm talking about. Hope these hints are enough to get you started. -Greg [P.S. to this message -- the next version of Radiance will have some features for easier modeling of depth-of-field and motion blur.] ========================================================================== ANTIMATTER MODIFIER LOOPS Date: Thu, 7 Sep 95 10:15:03 METDST From: Maurice Bierhuizen Subject: A question about Radiance To: greg@theo.lbl.gov (Gregory J. Ward) Mailer: Elm [revision: 70.85] Hi Greg, Could you help me with the following problem I have with a Radiance calculation? At a certain point RTRACE stops its calculation with the message 'fatal - possible modifier loop for polygon "SURFACE_461_1"'. Could you explain this error to me? My guess is that it has to do with the antimatters I use. I think this because a warning precedes the above error, and it says 'warning duplicate modifier for antimatter "anti_steal"'. The thing is that I can't see what's wrong with its definition. The definition of the antimatter considered is: void antimatter anti_steal 2 steal steal 0 0 To my understanding the definition of this antimatter means that it can cut 'holes' in steal objects and that it shades the hole with steal (but am I right?). Thanks in advance. Greetings, Maurice Bierhuizen. Date: Thu, 7 Sep 95 08:19:21 PDT From: greg (Greg Ward) To: maus@duticg.twi.tudelft.nl Subject: Re: A question about Radiance Hi Maurice, The warning is not really related to the error as far as I can tell. The antimatter specification you have given is indeed redundant -- you should get the same result (minus the warning) if you specify "steal" instead as the single modifier. 
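In other words, something like the following single-modifier definition should do (a minimal sketch, reusing the modifier name from Maurice's scene):

void antimatter anti_steal
1 steal
0
0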
(By the way, "steal" in English means to take something without permission -- I think you want "steel" here unless you are working in Dutch.) The modifier loop error is usually caused by referring to a modifier that refers eventually back to itself. One example might be: void metal steel 0 0 5 .6 .5 .65 .9 .05 void antimatter anti_steel 1 steel 0 0 # Later in this or another file... void alias steel anti_steel steel sphere sph1 0 0 4 10 -3 8 1.5 ----------------------- What happens above is we think we've "aliased" the modifier "anti_steel" to the modifier "steel" so that we can use it on some objects and they will end up subtracting sections from some other object. But since the definition of "anti_steel" refers to "steel" as one of it's arguments, and we have redefined "steel" to in fact be "anti_steel" -- it refers to itself and we end up with a modifier loop. Modifier loops were impossible in the original Radiance language. This was prevented because every object was modified by the most recent definition of another object in the input stream, and later definitions had no effect. There was no way to make a loop. Only with the advent of antimatter and other primitive types that refer in their string arguments to other modifiers can loops arise. This is because the actual objects those modifiers point to is not resolved until the object is used -- that is, until after the basic scene description has been read in. The reference is therefore resolved according to the last definition of that modifier, which could occur after the referring object. (I hope this is making sense.) Loops can then occur, because reference pointers go forward as well as backward in the file. The upshot is that somewhere in the modifiers affecting your object "SURFACE_461_1" there is an antimatter or something referring to an object that somehow refers back to itself. I hope this helps. -Greg ========================================================================== MODELING SEMITRANSPARENT LIGHT SHELF From: manuel@ise.fhg.de Subject: Transparent Light Shelf To: greg@theo.lbl.gov (Greg Ward) Date: Fri, 8 Sep 1995 16:47:02 +0100 (MESZ) Hi Greg, sorry for disturbing you, but a customer wants a simulation (urgent, as usual) with a partially transparent light shelf: reflection 55% transmission 25 % Any hints who to simulate such a material are gratefully appreciated! (Peter Apian-Bennewitz suggested mixfunc with a mirror and glass, but what would come first, the mirror or the glass?) Many thanks! Manuel ------------------------------------------------------------------- Manuel Goller Fraunhofer Institute for Solar Energy Systems Simulation Group, A608 Oltmannsstr. 5 D - 79100 Freiburg GERMANY Phone: ++49 - (0) 761 - 4588 - 296 Fax: ++49 - (0) 761 - 4588 - 132 Email: manuel@ise.fhg.de ------------------------------------------------------------------- Date: Fri, 8 Sep 95 10:02:22 PDT From: greg (Greg Ward) To: manuel@ise.fhg.de Subject: Re: Transparent Light Shelf Hi Manuel, I wouldn't use a mixfunc between mirror and glass because the calculation won't find the mirror and mark the shelf as a secondary source object. The only thing to do is treat it as a prism2 type. Using the transmission and reflection values you've given, the following specification should work: void prism2 light_shelf_mat 9 0.25 Dx Dy Dz if(Rdot,0.55,0) Dx+2*Rdot*Nx Dy+2*Rdot*Ny Dz+2*Rdot*Nz . 0 0 The spaces must be exactly as shown for these formulas to work inline. Normally, they would be put into a file and only referenced here. (The '.' 
at the end says no file is needed besides rayinit.cal.) I'm not doing anything fancy in the above to adjust the transmission and reflection as a function of angle, except to set reflection to zero for rays coming from below (positive Dz value). I hope this works! -Greg

Date: Tue, 12 Sep 95 16:56:02 PDT
From: greg (Greg Ward)
To: manuel@ise.fhg.de
Subject: Re: Transparent Light Shelf

Hi Manuel, Well, I don't usually do this, but I was thinking about your problem with the light shelf again, and I realized that the solution I gave you, although it will work, is not optimal. A more computationally efficient solution is to use the mirror type with BRTDfunc as the alternate material. Specifically:

void BRTDfunc one-way-mirror
10 if(Rdot,.55,0) if(Rdot,.55,0) if(Rdot,.55,0)
   .25 .25 .25
   0 0 0
   .
0
9 0 0 0 0 0 0 0 0 0

void mirror shelf-mirror
1 one-way-mirror
0
3 .55 .55 .55

shelf-mirror {shelf geometry}

The reason for using the BRTDfunc type is that it's the only one that allows you to specify a different front vs. back surface reflectance. The above specification is more efficient than the prism2 route primarily because it generates only a single virtual light source rather than two, one of which is redundant. I realize this probably comes too late, but I thought you should have good information, better late than never. -Greg

==========================================================================
MATERIAL MIXTURES

Date: Thu, 14 Sep 1995 09:43:03 -0700
From: martin@color.arce.ukans.edu (Martin Moeck)
Apparently-To: gjward@lbl.gov

Hi Greg, I would like to model a green matte cloth with red shiny metal threads in it. Could you give me a hint on how to do that? Thanks + take care, Martin

Date: Thu, 14 Sep 95 08:31:45 PDT
From: greg (Greg Ward)
To: martin@color.arce.ukans.edu

Regarding your cloth with shiny metal threads, this is a good application for the mixfunc type. Use it to mix between a diffuse green material and a red metal material, like so:

void plastic diffuse_green
0
0
5 .08 .6 .1 0 .1

void metal red_metal
0
0
5 .8 .05 .08 .9 .05

void mixfunc green_cloth_red_thread
4 red_metal diffuse_green inthread thread.cal
0
0

Of course, writing thread.cal is the difficult part. You need to define the variable inthread to be 1.0 where you want the red thread to be and 0.0 where you want the green cloth. Some kind of if statement is called for. If you need some help with it, I can look into it later. -Greg

==========================================================================
RADIANCE VS. POV-RAY

[I grabbed this off of network news.]

Radiance vs. Pov, by "Cloister Bell" [Jason Black] (cloister@hhhh.org)

Jonathan Williams (williamj@cs.uni.edu) wrote:
>: My question is, I do basically artistic renderings, and am not incredibly
>: hung up on reality. What benefits (if any) does Radiance have over Pov? I'm
>: looking at things such as easier syntax (not likely),

Easier syntax is certainly not one of Radiance's benefits. Writing Radiance files, until you get the hang of it, is truly tortuous. It's really not that bad once you do get the hang of it, but the learning curve is very steep. I would say that the biggest benefit to the artist is that Radiance's scene description language, while being difficult, supports generic shape primitives and texture primitives. This allows you to create anything (well, within the limits of your machine's memory, etc.) you can think of.
POV, by contrast, does limit you to the fixed and not entirely general shape primitives that it has (although heightfields somewhat make up for this, albeit in a less than elegant way), and even more so to the textures that it has.

A secondary (or at least, less important to me) benefit is Radiance's more accurate lighting. If you have the CPU to burn (and the patience), you can get much better lighting in your scenes than with POV.

POV's strong points are that it does have a heck of a lot of different shape and texture primitives, and that its input language is much simpler. POV is truly a great renderer for the beginning and intermediate computer artist. But it lacks some of the more advanced features that Radiance has, and in the end that's why I switched.

jhh@hebron.connected.com (Joel Hayes Hunter) writes [about POV vs. Radiance]:
>: more built-in shapes,

Well, yes and no. POV has more primitives, but Radiance handles shapes generically. If you like lots of primitives, then that's a plus for POV and a minus for Radiance, but that is also a limitation since you're limited to what you can do with those primitives. Notice the incredibly prevalent use of heightfields in POV to get around limitations of primitives. On the other hand, if you don't mind a little math in order to specify interesting shapes generically, then Radiance wins hands down even though it only has 9 or so object primitives. It has the primitives you are most likely to want, and anything else can be made (with varying degrees of effort) by parametrizing the surface you want to construct and specifying functions of those parameters which yield points on the surface.

Clearly Greg Ward (the author of Radiance) had this tradeoff in mind when he wrote Radiance. I'd be willing to bet that he was faced with the choice between implementing dozens and dozens of procedures for handling all sorts of primitives and doing the job once, generically, and so he chose the latter. This tradeoff is a win for shapes that do not easily decompose into geometric primitives, but a lose for ones that do. Some examples are in order (taken from a scene I'm working on currently):

1. Consider a 90 degree section of a tube. In POV this is a trivial CSG object, something like:

intersection {
  difference {
    cylinder {		// the outer cylinder
      <0,0,0>,		// center of one end
      <0,1,0>,		// center of other end
      0.5		// radius
    }
    cylinder {		// the inner, removed, cylinder
      <0,-.01,0>,
      <0,1.01,0>,
      0.4
    }
  }
  box {			// the 90 degree section of interest...
    <0,0,0>, <1,1,1>
  }
  texture {pigment Blue}
}

In Radiance, this object is difficult to produce. Consider that it has six faces, only 2 of which are rectangular. Each face has to be described separately:

# the rectangular end pieces:
blue polygon end1
0
0
12
	.4 0 0
	.5 0 0
	.5 0 1
	.4 0 1

blue polygon end2
0
0
12
	0 .5 0
	0 .4 0
	0 .4 1
	0 .5 1

# the curved corners of the towelbar
!gensurf blue right_outer_corner \
	'.5*cos((3+t)*PI*.5)' \
	'.5*sin((3+t)*PI*.5)' \
	's' 1 10 -s

!gensurf blue right_inner_corner \
	'.4*cos((3+t)*PI*.5)' \
	'.4*sin((3+t)*PI*.5)' \
	's' 1 10 -s

!gensurf blue right_bottom \
	'(.4+.1*s)*cos((3+t)*PI*.5)' \
	'(.4+.1*s)*sin((3+t)*PI*.5)' \
	'0' 1 10 -s

# same as previous, but translated up
!gensurf blue right_top \
	'(.4+.1*s)*cos((3+t)*PI*.5)' \
	'(.4+.1*s)*sin((3+t)*PI*.5)' \
	'0' 1 10 -s | xform -t 0 0 2

Clearly POV wins hands down on this shape.
But that's because this shape has such a simple decomposition in terms of regular primitives (actually, I do Radiance a small disservice with this example, since Radiance does have some CSG facilities which this example doesn't make use of).

2. Consider a curved shampoo bottle (modelled after the old-style "Head and Shoulders" bottle, before they recently changed their shape). This bottle can be described in English as the shape you get when you do the following: Place a 1.25 x 0.625 oval on the ground. Start lifting the oval. As you do, change its dimensions so that its length follows a sinusoid starting at 1.25, peaking soon at 2.0, then narrowing down to 0.5 at the top, while the width follows a similar sinusoid starting at 0.625, peaking at 0.8, and ending at 0.5. By this point, the oval is a circle of radius 1 and is 8 units above the ground. The shape swept out by this oval is the bottle.

In Radiance this bottle is described as:

# the above-described shape
!gensurf blue body \
	'(1.25+.75*sin((PI+5*PI*t)/4))*cos(2*PI*s)' \
	'(.625+.125*sin((PI+5*PI*t)/4))*sin(2*PI*s)' \
	'8*t' 20 20 -s

# the bottom of the bottle, which i could leave out since no one will
# ever see it:
!gensurf blue bottom \
	's*1.7803*cos(2*PI*t)' \
	's*0.7134*sin(2*PI*t)' \
	'0' 1 20

# an end-cap, which people will see.
blue ring endcap
0
0
8
	0 0 8
	0 0 1
	0 0.5

In POV, well, I'll be kind and not describe how this could be made in POV. The only answer is "heightfields", which a) are tedious to make, and b) take up lots of space. Clearly Radiance kicks butt over POV on this example. That's because this shape doesn't have a simple breakdown in terms of geometric primitives on which one can do CSG, but it does have a simple description in terms of parametric surfaces.

So depending on what sort of objects are in your scene, you may decide to go with POV because they're simple and POV makes it easy, or you may decide to go with Radiance because they're not and because you like the feeling of mental machismo you get for being able to handle the necessary math to make "gensurf" (the generic surface constructor) do what you want.

>: different lighting and texture models, etc.

Radiance does win hands down for lighting simulation. That's what it was written for, and it's hard to compete with something that is the right tool for the job. With textures, you are again faced with the choice of having a system with several primitive textures that you can combine however you like (POV), and a system that gives you many very good material properties and allows you to define your own textures mathematically in whatever way you can dream up. There are textures that Radiance can do that POV simply can't because POV's texture primitives can't be combined to handle it. For example (don't worry, this isn't as nasty as the previous), how about a linoleum tile floor with alternating tiles, like you often see in bathrooms, where the tiles form groups like this:

-------------
|___|   |   |
|   |   |   |
-------------
|   |   |___|
|   |   |   |
-------------

You get the idea. The problem is how do you get the lines to be one color/texture and the open spaces to be another? In POV, the answer is "you use an image map". This is fine, except that it leaves the scene's author with the task of creating an actual image file to map in, for which s/he may not have the tools readily available, and that image maps take up a lot of memory (although probably not for a small example like this), and tweaking them later may not be simple.
In Radiance, you can describe this floor mathematically (which is pretty easy in this case since it's a repeating pattern):

{ a tile floor pattern: }
# foreground is yellow, background is grey
foreground = 1 1 0
background = .8 .8 .8

xq = (mod(Px,16)-8); yq = (mod(Py,16)-8);
x = mod(mod(Px,16),8); y = mod(mod(Py,16),8);

htile(x,y) = if(abs(y-4)-.25, if(abs(y-4)-3.75,0,1), 0);
vtile(x,y) = if(abs(x-4)-.25, if(abs(x-4)-3.75,0,1), 0);

floor_color = linterp(if(xq, if(yq,htile(x,y),vtile(x,y)),
			if(yq,vtile(x,y),htile(x,y))),
		foreground, background);

Granted, this is horribly unreadable and is rather tricky to actually write. What it boils down to is a bunch of conditions about the ray/floor intersection point (Px, Py) such that some of the points are eventually assigned to be the foreground color, and some the background color. I won't explain the details of how those expressions produce the above pattern, but they do. Also note that I've simplified this example down to one color channel; in an actual Radiance function file you can specify wholly different functions for the red, green, and blue channels of the texture you're defining.

The salient feature of this example is that Radiance has facilities (even if they're hard to use) for creating any texture you like, so long as you can describe it mathematically in some form or another. And if you can't, you can always fall back on an image map as in POV. POV, on the other hand, offers you a pile of built-in textures, but to create a fundamentally different texture you have to actually add some C code to POV. Many users are not programmers, or may happen to be on a DOS system where you don't actually get a compiler with the system, which makes this solution impractical. And even if they can program a new texture, it will be a long time before it can get incorporated into the official POV distribution and thus be available to people without compilers. Of course, POV has lots of neat ways to combine textures and do some pretty eye-popping stuff, but we all know how much of a speed hit layered textures are. This is, in my opinion, why we see such an overuse of marble and wood textures in POV; marble and wood are reasonably interesting to look at and anything else is either boring, not worth implementing in C, or too slow to do with layered textures.

>: I'm not really concerned about speed, as I'm running on a unix box at
>: school, but if it's going to take days to render simple images (a ball on a
>: checkered field), I'd like to know.

I've actually been pretty impressed with Radiance's speed. It compares quite well to POV. I would say that it is on the whole faster than POV, although as scene complexity increases in different ways, that comparison can go right out the window. For example, POV gets really slow when you have layered textures and hordes of objects. Radiance does fine with lots of objects because of the way it stores the scene internally, but also slows down with complex textures. Radiance suffers more (imho) than POV when adding light sources, especially if there are any reflective surfaces in the scene. This is due to the mathematical properties of the radiosity algorithm. Also, Radiance has a lot of command line options that allow you to cause the rendering to take much much longer in order to further improve the accuracy of the simulation. Life is full of tradeoffs. In the balance I'd say that for the day to day work of previewing a scene that is under development, Radiance is faster.
Radiance wins in simple scenes by using an adaptive sampling algorithm that can treat large flat areas very quickly while spending more time on the interesting parts of the image. When you crank up the options and go for the photo-quality lighting with soft shadows, reflections, and the whole works, be prepared to leave your machine alone for a few days. The same is true of POV, of course. >: Also I believe its primitives and textures are quite limited (only a few of >: each). See above. In addition, Radiance has a lot more primitive material types than POV does. POV treats everything more or less as if it were made out of "stuff" that has properties you can set about it. That's fine most of the time, but isn't very realistic; different material types do actually interact with light differently in a physical sense. Radiance gives you different material types to handle those differences. Radiance takes the approach to materials that POV takes to shapes -- lots of primitives. >: But apparently nothing in the whole universe models actual light as >: accurately as this program does, so if that's what you want go for it... Well, it's a big universe, so I'd hesitate to say that. :) But I would venture to say that Radiance is the best free package that you'll find for doing lighting simulation. It's still missing some features that I'd like to see like reflection from curved surfaces [i.e. caustics] and focussing of light through transparent curved surfaces. Of course, I know how hard those things are to implement, so I'm not seriously criticising any renderers out there for not doing them. ========================================================================== PHOTOMETRIC UNITS AND COLOR Date: Fri, 23 Jun 1995 11:15:53 -0700 (MST) From: Vivek Mittal Subject: Exterior and interior illuminance values To: Greg Ward Hi Greg, I was trying to get exterior horizontal illuminance value used by radiance. I know that by using the 'gensky' command , I can get a value for the "Ground ambient level:" which corresponds to the irradiance/pi due to sky without the direct solar component. However, I am not really sure about two things: 1) how to get from the irradiance/pi the actual illuminace level (in fc or lux) 2) is it possible to get the direct solar component also added to this. Also, I am trying to get the interior illuminace values on a grid at workplane level for a room. I used the following command: rtrace -ov -I+ -faa -x 1 -y 84 scene.oct < grid.inp | rcalc -e '$1=54*$1+106*$2+20*$3' > output.cal where 'scene.oct' is my octree file,' grid.inp' is a ascii file containg 84 rows of grid point co-ordinates and direction vectors (0 0 1 for horizontal plane) I am not sure about the factors 54,106,20 I've used here. I got these from the example given by you in the MAN PAGES command 'RTRACE' (page 4) Lastly, when I click on a image generated by 'rpict' and displayed by 'ximage', and press 'l', I get a number corresponding to the illuminace at that point (something like '35L' etc) what are the units of this value and what does the 'L' stand for ? Lux ??? does this number also represent the true illuminance value at that particular point. The reason I am asking is that I tried to XIMAGE a illumination contour plot generated by DAYFACT, and clicked on various points followed by 'l' and the numbers I got were very low compared to the values represented by the contour lines in that image. Please help me clear my confusion in this matter. Thanks -vivek Date: Tue, 27 Jun 95 15:48:09 PDT From: greg (Gregory J. 
Ward)
To: mittal@asu.edu
Subject: Re: Exterior and interior illuminance values

Hi Vivek,

To get from irradiance to illuminance, just multiply by the efficacy value for white light, 179 lumens/watt. To add in the direct component, multiply the radiance of the sun (7.18e6 in the example below) by the solid angle subtended by its disk (6e-5 steradians) and by the sine of the incident angle, which is simply the third component of the source (.9681 below):

# gensky 6 21 12
# Solar altitude and azimuth: 75.5 -8.8
# Ground ambient level: 22.7

void light solar
0
0
3 7.18e+06 7.18e+06 7.18e+06

solar source sun
0
0
4 0.038263 -0.247650 0.968094 0.5

void brightfunc skyfunc
2 skybr skybright.cal
0
7 1 3.94e+01 3.11e+01 1.65e+00 0.038263 -0.247650 0.968094

So, using the above gensky output, our sky illuminance value is:

	22.7 * 3.1416 * 179 = 12764 lux		Note: corrected 8/22/1997 gwl

And our solar illuminance value is:

	7.18e6 * 6e-5 * .9681 * 179 = 74650 lux

And the total is:

	12764 + 74650 = 87400 lux	(rounding off sensibly)

As for the rcalc computation following rtrace, the coefficients you have are OK, but they depend on the definition of RGB values. This has changed slightly in the latest Radiance release, and the rtrace manual page now recommends:

	-e '$1=47*$1+117*$2+15*$3'

The result will not differ by much, since the sum of coefficients is still 179.

The values reported by ximage using the 'l' command are lumens/sr/m^2 for normal Radiance images, or lumens/m^2 (lux) for irradiance images (generated with the -i option to rpict). The 'L' following the value stands for "Lumens".

Picking off values in ximage only works on undoctored Radiance pictures. Specifically, the output of pcomb, pcompos and sometimes pinterp is not reliable, and falsecolor uses a combination of pcomb and pcompos to generate its output. Click on the original image if you want luminance, or an irradiance image if you want illuminance.

Hope this helps.
-Greg

From: "Galasiu, Anca"
To: "Greg J. Ward"
Subject: Re: Radiance info
Date: Fri, 30 Jun 95 14:44:00 EDT
Encoding: 38 TEXT

Dear Mr. Ward,

My name is Anca Galasiu and I am writing on behalf of the National Research Council - Lighting Research Group - Ottawa, Canada. I am currently working with Dr. Morad Atif on a building research project using the Adeline software. In the process of modeling the building with Radiance I came across some inconsistencies which hopefully you would be so kind as to clear up for me.

The disagreement refers to the way the "light" primitive has to be described and input into the program. In the Adeline manual (Radiance reference manual, page 4) it says: "Light is defined simply as a RGB radiance value (watts/rad2/m2)." However, in the Radiance user's manual, page 33, the same radiance value is given in watts/steradian/m2. What does "rad2" mean? Are these two units the same thing?

There are also some other things, described on page 32 of the same volume, that I could not figure out. At the end of page 32 there is a footnote (which refers to the way one can determine the light source's radiance on his own) saying: "You can use about 15 lumens/watt for an incandescent source, as a general rule of thumb. A 50 watts incandescent bulb is approximately equivalent to 500 lumens." How can these two sentences agree with each other?

Following this footnote there comes a paragraph that explains how to compute the radiance value of a source. In this paragraph there is a sentence that says: "convert lumens to watts (?) - multiply by 1 watt / 179 lumens).
The radiance value is supposedly obtained in watts/steradian/m2.

As one can see, the information given in the manual is very confusing for a first-time user of Adeline and I still don't know how to input the "light" primitive when the only information we have about the light sources in the building are lumens and watts.

Your help in answering these questions would be very much appreciated and we thank you in advance for your time.

Anca Galasiu
NRC - IRC - Building Performance Laboratory
Ottawa, Canada
Ph: (613) 993-9613
Fax: (613) 954-3733
E-mail: galasiu@irc.lan.nrc.ca

Date: Fri, 30 Jun 1995 16:36:32 -0800
To: greg (Gregory J. Ward)
From: chas@hobbes.lbl.gov (Charles Ehrlich)
Subject: Re: Adeline support question
Cc: greg, chas

Anca,

My name is Charles Ehrlich (Chas). I am the contact person for Adeline support questions. Your question, however, is of interest to the general Radiance community because it deals with the ongoing problem of incomplete documentation of Radiance. Your difficulties in describing light source primitives are justified and shared by many, especially beginners. In my explanation I hope not to further confuse you, but if I do, please feel free to ask for clarifications.

The unit of light used as input to Radiance light source primitives is radiance, expressed in watts/steradian/meter squared. (rad2 was Greg's unfortunate short-hand for steradian in the early days.) As such, it is effectively a "unitless" measure as far as the scene is concerned. The watts we are speaking of are not the same as the "wattage" of the lamp (bulb) used in the luminaire itself, of course. The reason for using such a dimension-independent value is that the rest of the rendering engine need not worry about what units of measure (feet versus meters) the geometry of the scene uses, so long as all units throughout the model are consistent. As such, it IS important to scale a light source from inches to meters, for example, if the luminaire is modeled in inches and the scene is modeled in meters.

If you don't know the number of lumens of your particular lamp, convert lamp wattage to lumens by multiplying the wattage of the lamp by the efficacy of that particular type of light source (incandescent = 14 lumens per watt).

Here is an outline of the process I use to describe a luminaire. First I'll assume that you don't have an IESNA candlepower distribution file for the luminaire, or you may have one but it won't work with ies2rad, or you may have one that you don't completely trust, or you have some other candlepower distribution data that won't work with ies2rad.

1. Count the number of lamps in the luminaire and note the wattage and source type (incandescent, fluorescent, etc.) Decide if source color rendition is an important aspect of your simulation. Understand that non-white sources will require that the images be color-balanced back to white, basically undoing the effort spent to create the colored sources in the first place. My advice is to assume white sources unless the scene uses two source types that vary greatly in their color spectra...and this difference is an important aspect of your simulation.

2. Choose the Radiance geometry primitive type which best describes the luminaire "aperture". For a "can" downlight with a circular opening, the surface primitive would be a ring. For a 2x4 troffer, it would be a polygon. For a fiber optic light guide, it would be a cylinder.

3. Calculate the area of the luminaire aperture in the unit of measure of your choice.
4. Determine the total number of lumens created by the luminaire's lamps.

5. Multiply the total lamp lumens by the luminaire efficacy, dirt and lumen depreciation factors, etc. to come up with the total lumens you expect the luminaire to deliver into the space. Use this value for the lumen output of the luminaire.

6. Use the lampcolor program to compute radiance using the above values. If you prefer to do it by hand, this is the formula for white sources (for colored sources use lampcolor):

	radiance = lumens / luminaire aperture area / (WHTEFFICACY*PI)

where WHTEFFICACY = 179, the luminous efficacy of white light. Because we're assuming that our source is white, the red, green and blue components of radiance are the same.

Now to actually create the Radiance primitives, use this template:

#
# luminaire.rad  incandescent can luminaire, origin at center
#
# units=inches
#
void light lamp_name
0
0
3 Red_radiance Green_radiance Blue_radiance

lamp_name ring can_downlight
0
0
8	0 0 -.00001
	0 0 -1
	0 luminaire_aperture_radius

.....OR.......

lamp_name polygon 2x4_troffer
0
0
12	1 2 -.00001
	1 -2 -.00001
	-1 -2 -.00001
	-1 2 -.00001

What is backwards about this process is that the luminaire aperture geometry (and therefore the area of the aperture) is defined after the luminaire luminous output, but the output is dependent upon the area of the luminaire aperture. Make sure that the area of the geometry primitive you create matches the area you specified with lampcolor.

Make sure that the surface primitive's surface normal is oriented in the direction you want. For a ring, you specify this explicitly with an x, y, z vector; for a polygon, the right-hand rule determines which side of the polygon is the "front" surface, i.e. the direction of output. For a cylinder, make sure you use a cylinder and not a tube (orientation is out). I recommend that you orient your luminaire toward the negative Z axis if the luminaire is normally ceiling or pole mounted, or toward the positive X axis if it is wall mounted. Locate the luminaire geometry such that its axis of symmetry is along the Z axis for ceiling mounted luminaires, with its output surface a small fraction below the origin (0,0,-.00001) so that the ceiling surface itself does not occlude the output geometry. For non-symmetrical fixtures, use some other known "mounting" point as the origin. Always document in the fixture file where it is located for future reference using the Radiance # comment at the beginning of the comment lines.

It is a good idea to describe the fixture geometry and output material together in one separate file so that this file can be moved in space using xform. A luminaire layout file then looks like:

#
# layout1.rad  Four luminaires at 8 feet on center
# ceiling height=9 feet
# luminaire units=inches, scaled to feet
#
!xform -s .0833333 -t 0 0 9 luminaire.rad
!xform -s .0833333 -t 0 8 9 luminaire.rad
!xform -s .0833333 -t 8 0 9 luminaire.rad
!xform -s .0833333 -t 8 8 9 luminaire.rad

>At the end of page 32 there is a footnote (which refers to the way one
>can determine the light source's radiance on his own) saying: "You can use
>about 15 lumens/watt for an incandescent source, as a general rule of thumb.
>A 50 watts incandescent bulb is approximately equivalent to 500 lumens."
> How can these two sentences agree with each other?

The footnote is incorrect. A typical incandescent lamp efficacy is really 14 lumens per lamp watt:

	50*14 = 700 lumens

Then reduce the lamp lumens by the 20-25 percent losses within the luminaire:

	700*(1-.25) = 525 lumens.
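[Continuing the arithmetic with made-up geometry, just to illustrate the radiance formula above: if those 525 lumens leave through a 6-inch diameter can opening (radius 3 inches = 0.0762 m, area = pi * 0.0762^2 = 0.0182 m^2), then radiance = 525 / 0.0182 / (179*pi) = about 51 watts/steradian/m^2 for each of the red, green and blue components.]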
So Greg wasn't all that far off.

>Following this
>footnote there comes a paragraph that explains how to compute the radiance
>value of a source. In this paragraph there is a sentence that says:
>"convert lumens to watts (?) - multiply by 1 watt / 179 lumens). The
>radiance value is supposedly obtained in watts/steradian/m2.

Here's where you need to understand the difference between lamp wattage and luminous flux, which is also a measure of energy and is represented in Radiance's generic watts. 179 is the luminous efficacy of white light over the visible spectrum; the units of luminous efficacy are lumens/watt. So our footnoted 50 watt incandescent lamp puts out about 2.93 (525/179) watts of light (luminous flux).

>As one can
>see, the information given in the manual is very confusing for a first-time
>user of Adeline and I still don't know how to input the "light" primitive
>when the only information we have about the light sources in the building
>are lumens and watts.

If I had my way about it, lampcolor would output the luminaire primitives. I suggest that you get ahold of some IESNA Candlepower Distribution files and convert them for the purpose of understanding what happens.

This method only works for single-surface emitting luminaires. As soon as you attempt to model a more complex emitting surface (for example, a cylinder with ring "ends", or a prismatic extrusion), then the output of the combined luminaire geometry must account for incident angle, because the emitting surface has a projected area greater than a single-surface emitter, and because one ray is traced to _each_ surface primitive which is described as a light source. In the file /usr/local/lib/ray/source.cal you will find functions for correcting for the projected area of box sources, flat sources, and other such things. As such, the method described above is a simplification. Using source.cal is a subject of a later message and probably more than you need to worry about right now.

I'd like to hear more about your trials and tribulations with Adeline. I don't promise to always spend as much time responding as I did this time, but I'd like to keep in touch.

Another ADELINE user recently had the following problem. His scene, while rendering, consumed more than his total physical RAM and began "swapping" to the hard disk (virtual memory). He got impatient with the time it was taking but could not stop the rendering by any method except by pressing the "reset" button. After doing so, his entire hard disk was "rendered" (sorry for the pun) useless and had to be re-formatted. So, beware and make backups!

-Chas

Charles Ehrlich
Principal Research Associate
Lawrence Berkeley National Laboratory
(510) 486-7916

Via: uk.ac.unl.clstr; Mon, 19 Jun 1995 15:39:56 +0100
Date: Mon, 19 JUN 95 15:40:17 BST
From: HKM5JACOBSA@CLUSTER.NORTH-LONDON.AC.UK
To: greg
Subject: IES files

Hi, Greg!

I am currently writing the report about my practical year here at the Uni of North London where I was doing this project for the British Sport Council. My plan is to make the report a special one, like a sort of tutorial for the use of IES files in RADIANCE. I got really into this topic, since this is what the project was all about. So I want to make the knowledge I got available for my colleagues. Just to make sure I am not writing any fairytales in it, I have got a question concerning the multipliers that can be specified when converting with ies2rad.
Is it correct that it doesn't matter whether the candela multiplier that has to be set to get the right lumen output is given with the -m option or as value in the ies file itself (after , as third value). Both seem to give the same results. Furthermore, I think that the interface doesn't pay any attention to the value for (second one). Correct? Another problem I got is the -f option for pfilt. It is supposed to colour balanced the image as if the specified luminaire was used. However, when I tried to use it like this, the results were just horrible. The picture became eighter completely red, green, or blue, depending on what lamp I gave as a parametre. When I run the pictures with the lampcolour defined in the input file, the results were entirely different (far more realistic, you know?) Have you any idea what this could be due to? I am relatively sure that I used the command in the right form, since it didn't produce any error messages. Example: pfilt -t "high pressure sodium" project.norm.pic > project.son.pic Seems alright, doesn't it? Please let me know if you have any suggestions on this. Looking forward to hearing from you. Axel Date: Tue, 27 Jun 95 11:38:16 PDT From: greg (Gregory J. Ward) To: HKM5JACOBSA@CLUSTER.NORTH-LONDON.AC.UK Subject: Re: IES files Hi Axel, You are right about the multipliers as interpreted by ies2rad. The -m option is the same as the multiplier in the input file, and the lumens per lamp (and the number of lamps) is not used. There are also a couple of other multipliers in the input file, for ballast factor and power factor that are also in there. You are using the -t option to pfilt correctly, but it is only to balance to white an image that was rendered using a color light source, which is the case if ies2rad is run with -t using the same lamp or the IES input file has a comment line that matches that lamp based on the lamp table (lampdat) file used. -Greg Date: Mon, 21 Aug 95 10:19:10 PDT From: greg (Greg Ward) To: burr%cory.coryphaeus.com@lbl.gov, burr@stobart.coryphaeus.com Subject: Re: Radiance vs. Irradiance Values Hi Tim, > I know that Radiance is defined as the energy per unit area per unit time > leaving a surface. What I don't know is how this relates to illumination > intensity as say measured via the HSV or RGB model. I found some example > light definitions that gave an RGB radiances of 100 100 100 and defined > this equivalent to a 100 watt light bulb. Is the radiance equivalent to > wattage? No, that must have either been a mistake or an unfortunate coincidence. The units of radiance are watts/steradian/sq.meter, and there are many things going on between the 100 watts put into a lamp and the value that comes out in terms of radiance. Specifically: 1) The lamp is not 100% efficient (closer to .08% for an incandescent). 2) The output is distributed over the visible spectrum. 3) The output is distributed over the surface of the lamp. 4) The output is distributed over all outgoing directions. Although it is not exactly the "right" thing to do, most light sources in a simulation are created as white (i.e. equal-energy) sources, so that color correction on the final result is unnecessary. If you were to use the appropriate spectrum for an incandescent for example, everything would come out looking sort of a dull orange color. Therefore, number 2 above is usually skipped. Number 3 is important -- after figuring the amount of actual radiated energy, you must divide by the radiating area of the light source surface in square meters. 
(This is for a diffusely radiating source.) A sphere has a surface of 4*pi*R^2, where R is the radius. A disk has a one-sided area of pi*R^2. Finally, you must divide by pi to account for the projected distribution of light over the radiated hemisphere at each surface point. (This accounts for the 1/steradian in the units of radiance.) Thus, the approximate formula to compute the watts/steradian/sq.meter for an incandescent spherical source is: Input_Watts * .0008 / (4*pi*R^2 * pi) For a 100 watt incandescent to end up with a radiating value of 100 w/sr/m^2, it would have to have a radius of just 4.5mm, which seems pretty small. It would have to be a frosted quartz-halogen, I guess. More likely, the file was just plain wrong. By the way, the program "lampcolor" is set up to do just this sort of simple calculation, so you don't have to. > When I run rview and then trace a ray I get output like the > following: > > ray hit walnut_wood polygon "floor_1.8" > at (14.3225 14.4806 0.008001) (4.17902) > value (0.084617 0.050445 0.036884) (10.5L) > > What is the last term - i.e. (10.5L)? I assume this stands for "luminance". > When I trace a ray directly to a light source I get a "luminance" value > way up there - i.e., around 18,000L. The "L" stands for "lumens", and in this case it is lumens/steradian/sq.meter which is the SI unit of luminance as you surmized. If you had been calculating with the -i option in effect, this would be interpreted as lumens/sq.meter, or lux instead. The conversion between lumens and watts in Radiance is *defined* as 179 lumens/watt, which is the approximate efficacy of white (equal-energy) light over the visible spectrum. > I guess all I want is to normalize these luminance values to the range > 0..1 so that I can modulate existing texture maps with the luminance. You cannot put physical units that don't belong in a range into a range without losing something. In the case of radiation, you are saying that above a certain value, you are going to start clipping the output of surfaces. > It doesn't seem like torad and fromrad are in sgicvt.tar.Z. The listing > of what this file contains is below. > > rwxr-xr-x 2015/119 95124 Apr 4 11:21 1995 obj2rad > rwxr-xr-x 2015/119 132284 Apr 4 11:21 1995 mgf2rad > rwxr-xr-x 2015/119 93284 Apr 4 11:21 1995 rad2mgf Oops! My mistake -- sorry! I had forgotten about these. I put a new tarfile on there with the fromrad and torad programs, this time called sgirgb.tar.Z so there would be less confusion. > I'm trying to use rtrace to capture the illuminance at certain sample > points in the radiance scene. As I mentioned before I'm writing a > program (actually an option off of our Designer's Workbench Modeler), > that runs thru the list of textures (the entire scene is textured), > and shoots a ray (or effectively samples the scene), at the 3D position > corresponding to each texel in each texture map. I only want to > consider diffuse illumination so I seems that I want to run rtrace > with the -I option. To get familiar with rtrace I used rtrace to > sample on a line that ran across a wall in my scene. This wall > was illuminate by two lights so there is some definite illumination > gradients that this line is cutting across. I ran rtrace both with > and w/o the -I option. I'm a bit confused by the output of the -I > option and was hoping that you could clarify things a bit. > > Below I have included both the sample file and output file for both > runs. 
> > rtrace -ov test.oct < samples > out (RUN w/o -I option) > --------------------------------------------------------------- > > "samples" --------------------------------------------------------- > > 15.3815 16.6402 2.41717 1.0 0.0 0.0 > 15.3815 16.3345 2.4156 1.0 0.0 0.0 > 15.3815 16.0287 2.41404 1.0 0.0 0.0 > 15.3815 15.723 2.41247 1.0 0.0 0.0 > 15.3815 15.4173 2.4109 1.0 0.0 0.0 > 15.3815 15.1115 2.40934 1.0 0.0 0.0 > 15.3815 14.8058 2.40777 1.0 0.0 0.0 > 15.3815 14.5001 2.4062 1.0 0.0 0.0 > 15.3815 14.1944 2.40463 1.0 0.0 0.0 > 15.3815 13.8886 2.40307 1.0 0.0 0.0 > 15.3815 13.5829 2.4015 1.0 0.0 0.0 > > "out" -------------------------------------------------------- > > #?RADIANCE > oconv standard.mat 2nd_lights.mat 2nd.rad 2nd_lights.rad > rtrace -ov > SOFTWARE= RADIANCE 2.5 official release May 30 1995 LBL > FORMAT=ascii > > 7.803375e-03 4.652013e-03 3.401471e-03 > 1.488979e+00 8.876603e-01 6.490420e-01 > 7.765268e-03 4.629295e-03 3.384861e-03 > 7.769140e-03 4.631604e-03 3.386549e-03 > 7.733397e-03 4.610295e-03 3.370968e-03 > 8.181990e-03 4.877724e-03 3.566508e-03 > 8.250995e-03 4.918863e-03 3.596588e-03 > 1.238935e+00 7.385956e-01 5.400484e-01 > 7.650747e-03 4.561022e-03 3.334941e-03 > 1.002132e-02 5.974249e-03 4.368268e-03 > 1.750831e-02 1.043765e-02 7.631827e-03 > > > > rtrace -I -ov test.oct < samples2 > out2 - RUN w/ -I option > -------------------------------------------------------------------- > 16.3815 16.6402 2.41717 -1.0 0.0 0.0 > 16.3815 16.3345 2.4156 -1.0 0.0 0.0 > 16.3815 16.0287 2.41404 -1.0 0.0 0.0 > 16.3815 15.723 2.41247 -1.0 0.0 0.0 > 16.3815 15.4173 2.4109 -1.0 0.0 0.0 > 16.3815 15.1115 2.40934 -1.0 0.0 0.0 > 16.3815 14.8058 2.40777 -1.0 0.0 0.0 > 16.3815 14.5001 2.4062 -1.0 0.0 0.0 > 16.3815 14.1944 2.40463 -1.0 0.0 0.0 > 16.3815 13.8886 2.40307 -1.0 0.0 0.0 > 16.3815 13.5829 2.4015 -1.0 0.0 0.0 > > #?RADIANCE > oconv standard.mat 2nd_lights.mat 2nd.rad 2nd_lights.rad > rtrace -I -ov > SOFTWARE= RADIANCE 2.5 official release May 30 1995 LBL > FORMAT=ascii > > 1.571468e-01 1.571468e-01 1.571468e-01 > 2.996901e+01 2.996901e+01 2.996901e+01 > 1.563788e-01 1.563788e-01 1.563788e-01 > 1.564564e-01 1.564564e-01 1.564564e-01 > 1.557390e-01 1.557390e-01 1.557390e-01 > 1.647728e-01 1.647728e-01 1.647728e-01 > 1.661623e-01 1.661623e-01 1.661623e-01 > 2.493524e+01 2.493524e+01 2.493524e+01 > 1.540738e-01 1.540738e-01 1.540738e-01 > 2.016437e-01 2.016437e-01 2.016437e-01 > 3.519261e-01 3.519261e-01 3.519261e-01 > > Questions: > > 1) According to the rtrace man page if you use the -I option the > origin and direction vectors are interpreted as measurement > and orientation. I understand the measurement to mean sample > point but I'm not sure why you need an orientation vector > in this case. The orientation vector is necessary because irradiance (and illuminance) are oriented quantities. That is, they look out on some hemisphere defined by some point and direction. > 2) Why radiance values for each of the R, G and B components different > for each ray but the irradiance values for each component the same > for each sample point? The difference is that for the values traced at the wall (w/o the -I option), you are figuring the color of the wall itself into the calculation, while the values away from the wall (w/ -I) are only considering the output of the light sources (though they would also include color bleeding from other surfaces if -ab was 1 or greater). 
If you had used the -i option during the first run, you would have gotten roughly equal values as this causes the initial surface intersected to get replaced by an illuminance calculation. I hope this clarifies things. -Greg Dear Radiance Users, It's been some time since I last put out a Radiance Digest -- almost a year! It has been a very busy time for me (and probably for you as well), so let me bring you up to date with a few changes. First, I am leaving LBNL to work for Silicon Graphics, Inc. starting May 5th. I am taking this opportunity to normalize my name usage everywhere to "Gregory Ward Larson," which has been my legal name since 1985 when I was married, though I never bothered to change it at work. (My wife and I picked this new, neutral last name to share with our two daughters, now 4 and 6 years of age. Larson is actually Cindy's maternal grandmother's maiden name, so you can think of it as a kind of affirmative action in a paternal society.) My new e-mail address at SGI is "gregl@sgi.com", but the old addresses should continue to work for some time. I will send out my official mailing address as soon as I figure out what it is. Second, "Rendering with Radiance: The Art and Science of Lighting Visualization," is nearly completed. We have a new publisher, which makes us very happy -- Morgan Kaufmann. The book, which will be accompanied by a CD-ROM, shall be available for purchase sometime in early 1998. We expect the list between $80 and $90, which is a lot for a book, but not much for software. We don't expect to sell a whole lot of them, or to get rich from it. Six authors have worked on the project. Rob Shakespeare of Indiana University and I wrote the bulk of it, with Charles Ehrlich, John Mardaljevic, Peter Apian-Bennewitz and Erich Phillips lending their expertise in the application chapters. Third, Charles Ehrlich will be taking over most of my Radiance support duties in my stead, though technically, we were never supposed to be doing support, anyway. His e-mail address is "CKEhrlich@lbl.gov", for those of you who don't know. He has been the ADELINE contact person for over a year now, and knows quite a bit about Radiance as he has been using the software intensively since 1988 or so. I will still be working on Radiance as time permits in my new job, and I hope that SGI will support it as part of their optional software distribution with their machines. And now, without further ado, the index for this digest: OPACITY MAPPING - using mixfunc to make holey materials RPIECE - best use questions for parallel rendering 3D STUDIO - translating from 3D Studio to Radiance COMPUTING REFLECTANCES - what is the total reflectance of a material? RTRACE QUESTIONS - using the -I option and computing radiosities PROCEDURAL TEXTURES - displacement mapping and efficiency issues MIST - questions about the new mist type MKILLUM QUESTIONS - how best to apply mkillum to daylighting COLORED LIGHT SOURCES - accounting for colored light sources All the best, -Greg +---------------------------------------------------------------------------+ | PLEASE AVOID THE EVIL REPLY COMMAND, AND ADDRESS YOUR E-MAIL EXPLICITYLY! 
|
+---------------------------------------------------------------------------+
| Radiance Discussion Group mailing list: radiance-discuss@hobbes.lbl.gov    |
| Mail requests to subscribe/cancel to: radiance-request@hobbes.lbl.gov      |
| Archives available from: ftp://hobbes.lbl.gov/pub/discuss                  |
+---------------------------------------------------------------------------+

=================================================================
OPACITY MAPPING

To: greg@hobbes.lbl.gov
Date: Thu, 27 Jun 1996 19:29:02 EDT
From: Takehiko Nagakura

Greg;

How are you? I am the MIT professor using radiance for architecture space rendering, if you remember me. I have a quick question. I would be glad if you could give me a tip.

Is there any way to do an opacity map in radiance? All I would like to do is to make a greyscale image and have the greyness translated into degree of transparency. I know how to do a color map, but cannot figure out how to do an opacity map.

Hope I am not bothering you. Thanks very much.

Takehiko Nagakura (Assist. Prof. of architecture, MIT)

Date: Sat, 29 Jun 96 17:45:10 PDT
From: greg (Gregory J. Ward)
To: takehiko@MIT.EDU
Subject: opacity maps in Radiance

Yes, I remember you, and yes, there is a way. In 3.0 (available at our ftp site, hobbes.lbl.gov), use the "mixdata" primitive with a converted image file, using "void" as one of your two modifiers. In version 2.5, you could do the same thing, but you would have to use a trans primitive with 100% transmittance instead of "void".

To convert the image, I suggest you use pvalue, and edit the output. Something like this:

	% pvalue -d -b Radiance_picture > data_file
	% vi data_file

The top of the file will look something like this:

#?RADIANCE
ra_tiff -r
pfilt -x /3 -y /3 -1 -r .6
pvalue -d -b
FORMAT=ascii

-Y 133 +X 270
2.639e-01 2.639e-01 2.639e-01 2.639e-01
 ... and so on

Edit it so that you have a valid data file, i.e.:

# Data file produced from a picture:
##?RADIANCE
#ra_tiff -r
#pfilt -x /3 -y /3 -1 -r .6
#pvalue -d -b
#FORMAT=ascii
1 0 133		# U image coordinate, decreasing
0 2.03 270	# V image coordinate, increasing
2.639e-01 2.639e-01 2.639e-01 2.639e-01
 ... and so on

Then, apply this in an opacity map as follows:

void plastic my_material
0
0
5 .5 .3 .7 .05 .02

void mixdata my_mixture
7 my_material void mymapping data_file mymapping.cal my_u my_v
0
0

The file "mymapping.cal" then contains definitions of the function mymapping(b), my_u and my_v, which indicate the mapping from grey pixel value to opacity, and the U and V picture coordinates, respectively. A really simple example is as follows:

mymapping(b) = min(b,1);
my_u = Px;
my_v = Py;

This assumes the object in question is located in the positive quadrant of the XY plane in world coordinates, and its size corresponds to the dimensions we entered for the image, namely 2.03 units in X and 1 unit in Y. The image pixel brightness values also equal the opacity we desire. (The min function is just to limit the range from 0 to 1, to avoid any confusion even though the routines don't break if we go over 1.) You could then add a transformation to the end of the mixdata primitive's string arguments to move it to a different location and size.

I hope this gives you the idea.
If you do this in 2.5, you should leave off the #-delimited comments in the data file, and use a transparent primitive rather than "void", i.e.:

void trans invisible
0
0
7 0 0 0 0 0 1 1

-Greg

=================================================================
RPIECE

From: "Alois Goller"
Date: Mon, 2 Sep 1996 14:39:05 -0600
To: "Gregory J. Ward"
Subject: Parallel Radiance

Hi Greg,

back from holidays, I did some timing measurements. I got similar figures in the range you report in "Parallel Rendering on the ICSD SPARC-10's" (radiance/doku/Notes/parallel.html): dividing the image into rather large patches (about 5 to 10 per processor) and having each processor render more than 20 minutes per patch results in good CPU utilization (more than 98% continuously). As you already mentioned, some of the processors are idle at the very end. This causes the efficiency to be at about 85% in my cases.

However, rendering smaller files, and using many pieces, there was a surprise:

        | rpict (1)     | rpiece (1)    | rpiece (3)    | rpiece (10)
--------+---------------+---------------+---------------+---------------
   A    |  1:49.66      | 19:02.09      | 14:22.93      | 14:48.25
   B    |  5:26.79      | 17:38.85      | 12:49.82      | 14:23.04
   C    | 17:03.23      | 29:11.56      | 14:15.85      | 15:12.59

C is 2 times larger than B (in x and y, resulting in the fourfold number of pixels), and B is 2 times larger than A. This can also be seen by looking at the second column (rpict (1)), which shows rpict running on a single processor. Running rpiece with 1, 3, and 10 processors takes a nearly constant 15 minutes or so, at CPU utilization levels ranging from 1% to 60%.

Have you experienced similar results? Is there something wrong with our NFS, rpiece, etc.?

--
+----------------------------------------------------------------------+
| Alois GOLLER |
| Parallel and Distributed Computing (for Remote Sensing Applications) |
| Institute f. Computer Graphics and Vision, Technical University Graz |
| Muenzgrabenstrasse 11, A-8010 Graz, Austria |
+-----------------------------------------+----------------------------+
| Tel. +43 (316) 873-50 25 \ FAX +43 (316) 873-50 50 |
| E-mail goller@icg.tu-graz.ac.at \ Home +43 (316) 80 46-67 |
| WWW http://www.icg.tu-graz.ac.at/alGo.html \ or +43 (4842) 67 51 |
+---------------------------------------------+------------------------+

Date: Tue, 3 Sep 96 09:50:09 PDT
From: greg (Gregory J. Ward)
To: goller@icg.tu-graz.ac.at
Subject: Re: Parallel Radiance

Hi Alois,

I would never attempt to render with rpiece any job that used less than an hour of CPU time on a single processor -- I don't think it's worth distributing such a small job. In particular, you should not break a short job into too many small pieces, as each piece will complete too quickly and you'll end up with NFS lock manager bottlenecks, which is what I assume you are seeing here. Rpiece is waiting around for I/O completion most of the time. Either use larger pieces on a shorter job, or else don't use rpiece at all if it's too short.

I haven't done extensive enough testing to give you more accurate information than this, but I expect if you did some parametric runs that you would find an optimum somewhere around 5-10 minutes per piece in rpiece -- longer if you have more rpiece jobs running. The NFS lock manager can be VERY slow on some systems.

-Greg

P.S. Thanks for sharing your results with me!
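[A rough sizing sketch along the lines of the 5-10 minute guideline above, with made-up numbers: if a frame takes about 3 hours (180 minutes) of CPU time on a single processor, then 180 / 8 minutes per piece = roughly 22 pieces, so dividing the image into something like a 5x5 grid gives each piece around 7 minutes of work -- long enough to amortize the NFS lock manager traffic, while still leaving a few pieces per processor to balance the load at the end.]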
=================================================================
3D STUDIO

Date: Mon, 16 Sep 1996 14:47:25 -0400
From: James_F_Todd@email.whirlpool.com (James F Todd)
Subject: Radiance File Translators
To: gjward@lbl.gov

I have just begun looking at the Radiance Website, and I had a question: Does there exist a translator that allows the import of POV, DXF, or 3DS models into the Radiance renderer? This may be a very stupid question, but I'm more involved with the creative side of things, not the technical. The reason I ask about those three is that I work with 3DStudio, and I have utilities to convert between its native 3DS format and either DXF or POV.

Thanks in advance for any information you can provide.

JT

Date: Mon, 16 Sep 96 13:49:25 PDT
From: greg (Gregory J. Ward)
To: James_F_Todd@email.whirlpool.com
Subject: Re: Radiance File Translators

Hi JT,

There are a couple of programs by which you can get from 3DS to Radiance. The first converts 3DS into MGF -- a format I developed which is physically-based and fairly compatible with Radiance. From there, we have a translator into Radiance. I recommend this route rather than going from DXF, which loses most of the pertinent material information needed for rendering.

-Greg

=================================================================
COMPUTING REFLECTANCES

Date: Fri, 20 Sep 1996 17:57:33 -0400 (EDT)
To: greg@hobbes.lbl.gov (Gregory J. Ward)
From: jedev@visarc.com (John E. de Valpine)
Subject: Material Reflectances

Greg:

I'll give the parallel rendering a try soon. I'll probably try doing a network rendering first before footing the bill for a new cpu.

On another note, I am trying to get a better understanding of how reflectance values relate to Radiance material definitions. I gleaned the following from the digests:

	General:  refl = (0.3)(r) + (0.59)(g) + (0.11)(b)
	Plastic:  refl = [(0.263)(r) + (0.655)(g) + (0.082)(b)](1 - spec) + spec
	Metal:    refl = (0.263)(r) + (0.655)(g) + (0.082)(b)

How are these derived? How do these relate to ro_d ro_si ro_s ro_a in the reflection model described in materials.1? How are the weights derived? If I understand correctly, in the case of plastic the diffuse reflectance would be: pC(1-r_subS) where p = <1,1,1> C =

To: greg@hobbes.lbl.gov
Subject: Reflectance factor

Hi Greg!

When I have a material type plastic, could I say its reflectance is the three components RGB? For example:

void plastic material
0
0
3 0.64 .064 0.63

This material is not completely white, how can I define its reflectance value? Because if it was white I think I could say its reflectance is equal to the value of the three RGB components, couldn't I?

Thanks, Bye,
Rosinda

Date: Fri, 15 Nov 96 09:31:54 PST
From: greg (Gregory J. Ward)
To: rduarte@lge3.dee.uc.pt
Subject: Re: Reflectance factor

Hi Rosinda,

It's nice to hear from you again. I thought you had left the university, because your e-mails have been bouncing from the Radiance discussion group, and I had to remove you from the list. If this gets through, I'll add you back in again. In the meantime, you can look at the /pub/discuss directory on hobbes.lbl.gov to see what you've missed.

The reflectance of plastic can be computed from the RGB, specularity and roughness as follows:

	total_reflectance = arg4 + (1-arg4)*(.265*arg1 + .67*arg2 + .065*arg3)

where arg1-arg4 are the first four of five real arguments to the primitive.
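[As a quick check of this formula with made-up numbers: a plastic with RGB reflectances .5 .4 .3 and a specularity of .05 gives total_reflectance = .05 + (1-.05)*(.265*.5 + .67*.4 + .065*.3) = .05 + .95*.42 = .45, or about 45%.]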
-Greg ================================================================= RTRACE QUESTIONS Date: Wed, 25 Sep 96 19:14 MDT From: dxs@november.diac.com To: gjward@lbl.gov i have a question about rtrace. i ran the program with the following input points and command options in a options file. i noticed that the input value for the x coodinate in the output data was incorrect, it was 8.5 instead of 7.5. my question is: is rtrace using my input value of 7.5 and calculating with it and just not outputing it correctly? or am i using incorrect options? i am running on linux, using a pentium. (this machine is new, so i am sure that it does not have the divide error). thanks, dan stanger sc3.pts 7.5 7.5 2.6 1.0 0.0 0.0 sc3.out #?RADIANCE oconv sc3.rad rtrace -oodv -I -dp 128 -ar 52 -ds 0.25 -dt 0.1 -dc 1.0 -dr 1 -sj 0 -st 1 -aa 0 -ab 10 -as 0 -lw .02 SOFTWARE= RADIANCE 3.0 official release June 19, 1996 FORMAT=ascii 8.500000e+00 7.500000e+00 2.600000e+00 -1.000000e+00 -0.000000e+00 -0.000000e+00 2.995701e+00 2.995701e+00 2.995701e+00 Date: Thu, 26 Sep 96 09:11:40 PDT From: greg (Gregory J. Ward) To: dxs@november.diac.com Subject: rtrace -I weirdness Hi Dan, You're not doing anything incorrectly, and there's nothing wrong with your system. In fact, the -I option of rtrace causes some of the output to appear a bit strange because of the way it's computed. Normally, ray tracing starts from a point and heads in some direction. With the -I option to rtrace, we are asking it to start from an intersection at a virtual surface and look at the hemisphere centered on some normal vector. To trick the calculation into doing what we want, we create a virtual ray intersection with this virtual surface by starting off a ray 1 unit above the surface point and directing it towards are desired "origin." In fact, this means that the ray origin is 1 unit above our specified point, so that is why the -oo output option gives you this difference. I suggest instead that you use -op to get the point echoed in the case of rtrace -I. -Greg From: "Valois Jean-Sebastien" Date: Thu, 26 Sep 1996 08:56:47 -0400 To: GjWard@lbl.gov Subject: Question. Hi Mr Ward, How are you ? I was wondering if it is possible to: 1. "print" the values of the variables in a ".cal" file ? (debugger) 2. simulate the noise (sampling) effect of a lens of a camera ? 3. simulate the various behavior of a camera to different lighting conditions ? Thank you. Sincerely Jean-Sebastien From greg Thu Sep 26 09:25:53 1996 Return-Path: Date: Thu, 26 Sep 96 09:25:35 PDT From: greg (Gregory J. Ward) To: valois@cim.mcgill.ca Subject: Re: Question. Status: R Hi Jean-Sebastien, In answer to your questions: > 1. "print" the values of the variables in a ".cal" file ? (debugger) There is a shell script called "debugcal.csh" that may be found in the ray/src/util directory of the Radiance distribution. Copy it from there to your bin directory with the name "debugcal" then set the mode to execute with "chmod 755 {bin_directory}/debugcal". Run the "rehash" command then in your scene directory call: % ximage {picture} | debugcal {octree} -f {calfile} -e '{rcalc_code}' Where {picture} is a picture rendered with your problem {calfile}, and {rcalc_code} is a set of assignments to output variables you want to look at, e.g. '$1=wood_dx;$2=wood_dy;$3=wood_dz' -- that sort of thing. Once your picture is displayed, hit the middle mouse button on points in the image where you want to see the computed variables. > 2. simulate the noise (sampling) effect of a lens of a camera ? 
If you mean the depth of field, there is a script in the Radiance 3.0 distribution called "pdfblur" that may be used in conjunction with pinterp to produce depth of field blurring. Motion blur is possible using a similar technique with the "pmblur" script, but this is usually applied within the new animation control program, "ranimate." > 3. simulate the various behavior of a camera to different lighting > conditions? I'm not sure exactly which effects you mean. Can you be more specific? I am working at this moment on some techniques and algorithms for adaptive display based on the human visual system, so you can look for that in the next release. -Greg From: bits1@teil.soft.net (BITS TRAINEES grp-1) Subject: rtrace and meshing To: greg@hobbes.lbl.gov Date: Thu, 17 Oct 1996 09:57:17 -0800 (PST) Hello Greg, I am doing a project in Visual simulation which involves adding diffuse lighting values to the scene model before it can be used for visual simulation purpose. I am using RADIANCE for the above purpose. I really find Radiance a very cool package for the purpose. I however have some doubts in using it for my project. They are : 1) I am computing the radiosity at each vertex in the scene. In order to find the radiosity I use rtrace to find the Irradiance at each vertex. I fire a ray from each vertex in the direction of the normal. However I am a bit doubtful as to how I should find the radiosity from the irradiance. I am having the only the color of the face and no material properties. Since all the faces in my scene are diffuse I assume the color to be the diffuse reflectance function. I therefore multiply the irradiance with the reflectance function to get the radiosity which I use for display. Is this method correct? Can you suggest as to how I can compute the radiosity from the irradiance if I am having the material properties or the BRDF data? 2) My next doubt is does rtrace return the values in RGB space or in HSV space because I am at present assuming that it returns the values in RGB space. 3) Before my scene is taken up by the rtrace program I mesh the scene suitably so that the lighting appears smooth. So I use some triangulation such as Delaunay triangulation to mesh the scene. However I knowe that Radiance also meshes the scene. Therefore can you please tell me as to how I can control the meshing done by Radiance. In my case I may want Radiance to override the meshing part because I am already doing it. 4) Can you suugest as to what parameters I should use for rtrace so that it yields the best results. At present I am using rtrace in the following fashion : rtrace -av 0.1 0.1 0.1 -ab 5 -ad 5 -as 5 -lr 5 -aw 4 -st 0.1 -dv -I\ -h -w temp.oct < rayinput.dat > radiance.out Here rayinput.dat is the input file of ray org. & dir. the output is kept in radiance.out 5) Finally Can you please send me an example file in which you apply texture to one of the polygons in the scene. I want to know as to how this is done. A simple example with one polygon and one RGB image as texture would suffice. I know that I am asking a lot but please do find time to answer my doubts. It would help me in my project greatly if you could reply quickly. Thanking You. - Mahaboob Ali ( bits1@teil.soft.net ) Date: Thu, 17 Oct 96 09:51:03 PDT From: greg (Gregory J. Ward) To: bits1@teil.soft.net Subject: Re: rtrace and meshing Hello Mahaboob, I will do my best to answer your queries.... > 1) I am computing the radiosity at each vertex in the scene. > ... Your current method is sound. 
The radiosity is in fact the diffuse irradiance multiplied by the diffuse reflectance. Radiosity for non-diffuse surfaces is undefined, so I cannot recommend a method for computing vertex radiosities considering specular reflection. Your renderings wouldn't work anyway, since you are relying on the radiosities being the same from all directions, which cannot be the case with specular surfaces. > 2) My next doubt is does rtrace return the values in RGB space or > in HSV space because I am at present assuming that it returns the values > in RGB space. All computations in Radiance are carried out and represented in RGB space. > 3) Before my scene is taken up by the rtrace program I mesh the scene > ... Radiance does not do any explicit meshing, and it is not possible to control how Radiance samples the scene at this level. All you can do is adjust the rendering parameters to achieve denser (more accurate) sampling or sparser (less accurate) sampling. The placement of samples is dictated by accuracy requirements rather than absolute spacing, which is one of the main advantages of Radiance over traditional radiosity codes. > 4) Can you suugest as to what parameters I should use for rtrace so > ... These parameters will most certainly produce poor results. In particular, the value of -ad 5 is much too low to get an accurate sampling of indirect diffuse radiation. You really should use the "rad" program, with its more intuitive control variables, to come up with options for rtrace. To do this, read the rad man page carefully, set up your rad input file, then set an OPTFILE=render.opt line in it and run: rad -v 0 myscene.rif rtrace @render.opt -I temp.oct < rayinput.dat > irradiance.out > 5) Finally Can you please send me an example file in which you apply > texture to one of the polygons in the scene. I want to know as to how > this is done. A simple example with one polygon and one RGB image as > texture would suffice. This doesn't seem related to your project, since textures have no direct effect on irradiance values. Nevertheless, here is a simple example of applying a "texture" (which we call a "pattern") in Radiance: void colorpict oakfloor_pat 9 red green blue oakfloor.pic picture.cal tile_u tile_v -s 1.16670 1 .578313253 oakfloor_pat plastic wood_floor 0 0 5 .2 .2 .2 .02 .05 This was taken from the draft Radiance user's manual by Cindy Larson. Wiley is publishing a book on Radiance, which will hopefully be on the shelves by next summer, and this will offer a much more extensive guide to the software. In the meantime, please avail yourself of the materials that exist in the ray/doc directory, which includes this manual (called "usman1.doc"). I hope this helps! -Greg ================================================================= PROCEDURAL TEXTURES Date: Thu, 10 Oct 1996 13:25:28 -0400 (EDT) To: greg@hobbes.lbl.gov (Gregory J. Ward) From: jedev@visarc.com (John E. de Valpine) Subject: Procedural Textures Greg: As I understand it texfunc perturbs the surface normal for a material by modifying the normal with the results of the xfunc yfunc zfunc in some texture.cal file. This results in what I understand as 2-d bump mapping. Is there any way to do displacement mapping where the normal at the rayhit may or may not be displaced according to some set of parameters. I am trying to simulate the effects of concrete formwork, ie a panels size and the holes for form ties within a given panel. I made a procedural material that accomplishes this as a colorfunc. 
But I would like to be able to achieve 3d effects using displacement mapping. On a similar note, in writing code for procedural materials, is there a time/evaluation trade off between the two folowing: Tx = arg(1); Ty = arg(2); vs. T(xy) = select(xy,arg(1),arg(2)); How does the evaluation occur? Everytime Tx or Ty is evaluated is arg(n) executed or are Tx and Ty instantiated on first evaluation? -Jack Date: Thu, 10 Oct 96 10:53:59 PDT From: greg (Gregory J. Ward) To: jedev@visarc.com Subject: Re: Procedural Textures Hi Jack, There is no displacement mapping in the current release of Radiance, nor is any expected in the forseeable future. This is because doing displacement mapping correctly is HARD! My only suggestion is if you really want this effect, that you generate the geometry for one panel and put it in an octree to instantiate throughout your scene to keep the memory costs down. The additional time spent ray tracing should not be that great, especially if you instantiate the little holes instead of the whole panel. (You'll have to cut holes in the panels of an appropriate size, but this won't add too much time so long as they are rectangular.) As for your procedural materials, the first method of assigning separate variables is faster because variables are only evaluated once during each ray execution cycle, whereas a function is reevaluated on each call. -Greg ================================================================= MIST Date: Wed, 8 Jan 1997 18:41:37 -0500 (EST) From: Mark Stock To: radiance-discuss@hobbes.lbl.gov Subject: Mist Question Hello, I have a simple question: Why can I not get a mist sphere to rpict properly with a user-defined albedo? void mist cloudstuff 1 sky 0 6 0.005 0.005 0.005 0.8 0.8 0.8 cloudstuff sphere cloud1 0 0 4 -100 -150 100 180 ...plus other stuff in the scene rpict -vp -700 -700 20 -vd 1 1 0.2 -ab 1 scene.oct > image1.pic rpict: fatal - bad arguments for mist "cloudstuff" rpict: 8904 rays, 22.85% after 0.001u 0.000s 0.001r hours on ...... The mist material works when only the extinction coefficients are used. Any help will be appreciated! Thanks! Mark Stock mstock@engin.umich.edu Date: Wed, 8 Jan 97 15:50:15 PST From: greg (Gregory J. Ward) To: mstock@engin.umich.edu Subject: Re: Mist Question Hi Mark, There was a bug in release 3.0 regarding mist arguments, that has been fixed in a patch. Be sure to install all the patches in the /pub/patch directory on hobbes.lbl.gov. The process has been automated to make it easier. I'm not positive, but I'm pretty sure that putting "sky" as your scattering source is not going to work. This is only meant for normal, non-glow sources. This should work anyway without it to give you a spherical cloud. Do you still want me to post this to the rest of the group, or shall I just stick it in the digest for the next distribution? -Greg Date: Wed, 8 Jan 1997 18:55:01 -0500 (EST) From: Mark Stock To: greg@hobbes.lbl.gov Subject: Re: Mist Question Greg, Naw, I don't think there's a need to post it to the group, I just hadn't realized that there were patches that weren't installed on our system here. If you've recieved enough questions about it, though, you may want to. Thanks for the quick reply! Mark Stock mstock@engin.umich.edu ================================================================= MKILLUM QUESTIONS Date: Tue, 7 Jan 97 10:10:58 GMT From: milan@esru.strath.ac.uk (Milan Janak) To: greg@hobbes.lbl.gov Dear Greg, First of all I would like to wish you a Happy New Year. 
I am once again back here in ESRU Glasgow working on couple European Daylighting and Glazing research project. A part of this work involves detailed simulation of the daylight linking controls by run-time coupling of the thermal simulation (ESP-r) and the lighting simulation (Radiance). Here I would appreciate to get little more insight to the "mkillum" treatment of the highly directional light sources such as a sun or any virtual light sources by specular reflection from external surfaces. I have carry out simple test with sky with sun and calculated window light distribution (window facing sun) by mkillum and then couple of internal point illuminances (e.g. 13000 lx and 476 lx) Then I have deleted sun from the model and repeated the calculation of the internal point illuminance with the same window light distribution calculated previously (model with sun) (e.g. 2000 lx and 475 lx). The results suggest to me that: mkillum does not "map" such highly directional sources (e.g. sun) into the light distribution? These are probably moved to the direct calculation as that is in my opinion the most effective way to handle them. I am right to think that light distribution calculated by mkillum will not contain direct and specular contributions to the window plane? I would suppose that for direct and specular contribution mkillum window model reverts back to the primary material e.g. glass to calculate these direct contributions into the internal illuminance? With best regards, Milan Janak, ESRU University of Strathclyde, Glasgow. Date: Tue, 7 Jan 97 09:32:05 PST From: greg (Gregory J. Ward) To: milan@esru.strath.ac.uk Subject: mkillum Hi Milan, Yes, you are correct. Mkillum only computes the directional diffuse component of light coming in through a window system -- the specular (beam) component is best handled by the default algorithm. Actually, if you are only computing a few point values, there is no sense in using mkillum at all. You are better off just using the ambient calculation unless you plan to produce one or more renderings. I can provide you with draft chapters from the Radiance book explaining all this, if you have a PDF viewer. I am very busy this week, working on a Siggraph paper. You might do better to write back to me after the 13th. -Greg Date: Wed, 26 Mar 97 14:12:39 GMT From: milan@esru.strath.ac.uk (Milan Janak) To: greg@hobbes.lbl.gov Subject: Light shelf-ceiling and mkillum Dear Greg, I would appreciate very your help with following: As explained in your paper "The RADIANCE Lighting Simulation System" it is a good idea to treate illuminated part of the ceiling (e.g. from reflecting light shelf in my case) as illum source with precalculated light distribution. As you said, this (ceiling) is in reality important light source. Up till now everything is clear. Its starts to be little more difficult to get it right in all details. The best probably is to give example: Let's say we have room with external specular light shelf and want to assess its performance under overcast and clear sky conditions. So firstly we run mkillum to precalculate light distribution for external windows. So as mkillum will include only diffuse directional contributions, for overcast sky it will map all external contributions (also from specular surfaces??) but for clear sky conditions (with sun) it will omitt beam contributions from sun or reflected from specular light shelf, as these are handled by default calculations. Then we build up octree with precalculated light outputs for external windows. 
Secondly, we run mkillum for part of illuminated ceiling by light reflected from light shelf. So for overcast sky, ceiling will actually see only external window's illum light sources as there should not be any beam contributions? For clear sky conditions there will be additional beam contribution from sun reflected in light shelf which will be now included in ceiling illum light output? Base on my understanding, secondary light sources are not participating in specular and diffuse sampling (they revert to the original material) and therefore mkillum for ceiling should be calculated with -ab 0, as otherwise there would be double counting for ambient contribution from this part of the ceiling??? Thank you ones again very much. Milan, ESRU, Glasgow. Date: Wed, 26 Mar 97 09:27:29 PST From: greg (Gregory J. Ward) To: milan@esru.strath.ac.uk Subject: Re: Light shelf-ceiling and mkillum Hi Milan, Your understanding of the secondary light sources in Radiance using mkillum seems to be quite good, and I'll be the first to agree that this is a very confusing topic! Do not worry about double-counting in Radiance -- these things are pretty well taken care of so you don't have to think about it. If you compute the distribution from your ceiling with -ab 0, then the ceiling output will not include these interreflections, because illum's do NOT participate as their original materials in the indirect calculation. Therefore, I recommend that you set -ab at 1 or two when your run mkillum. Also, you can save on calculation later if you set d=0 in the mkillum scene input file for the ceiling if it is a diffuse material. I hope this helps. -Greg ================================================================= COLORED LIGHT SOURCES Date: Wed, 9 Apr 97 19:19:08 BST To: greg@hobbes.lbl.gov Subject: Lamp colors in Radiance RGB format From: Jeff.Shaw@arup.com My name is Jeff Shaw, and I work with Steve Walker and Andy Sedgwick for Ove Arup & Partners in London. I have been doing a lot of work with Radiance lately, and Steve thought you may be able to offer some advice on a recent problem that I have encountered. Essentially what I am attempting to do is to accurately model the color of the lamps in some luminaires which I have modelled in Radiance for a particular project. So far I hove gone about this in the following way: I have started with the Spectral Power Distributions of the fluorescent lamp phospors which I am interested (provided to me by a lamp manufacturer). Using spreadsheet, I have converted these tables of values (between 400nm and 760nm) producing, for each lamp color, a single set of x and y CIE chromaticity chart coordinates. I have used the CIE 1931 color-matching Functions (Distribution Coefficients) to do this, and am confident that this process has given me reasonable output. The problem I have is how then to proceed with the conversion of these x and y color chart coordinates into RGB values readable by Radiance which produce an accurate color for each lamp. I first attempted this using an in-house C script (rgb.c - attached for your information) I believe written some time ago by a colleague who is no longer here. The script, however seems incomplete and only converts to RGB according to NTSC standard rather than CIE standard. As it is, I used the script to covert my x and y color chart coordinates to RGB values. 
For instance: For a 3500K fluorescent lamp: x = 0.4022 ; y = 0.3639 > R = 0.4531 ; G = 0.3251 ; B = 0.2219 For a 5400K fluorescent lamp: x = 0.3318 ; y = 0.3478 > R = 0.3238 ; G = 0.2987 ; B = 0.3775 When I applied these values to my Radiance scene and viewed the resulting octrees, the 3500K lamp seemed too yellow/pink and the 5400K lamp seemed much too purple, as you may expect. This could be for three reasons: 1) The colors really are like that, the brainm just perceives them differently in real life. 2) The monitor is distorting/not properly showing the colors. 3) The RGB values are wrong. Steve and I had a look at the new ra_xyze script which you wrote, and when we applied it to our pictures with the -r -p options (using the values on your man page for -p) the picture did appear slightly whiter as we wanted. But we are not entirely sure what ra_xyze does and whether we used it correctly. Sorry to write a rather long and rambling email, but I hope I adequately explained the situation. We thought that as you have been doing a lot of work on color recently for Version 3.0 of Radiance, you may be able to help. I would appreciate any comments or suggestions that you have. Regards, Jeff Shaw jeff.shaw@arup.com Date: Wed, 9 Apr 97 11:39:42 PDT From: greg (Gregory J. Ward) To: Jeff.Shaw@arup.com Subject: Re: Lamp colors in Radiance RGB format Hi Jeff, Indeed, Radiance RGB values are not the same as NTSC RGB. There are routines within Radiance for converting from CIE colors to Radiance RGB values, but it is probably easiest just to use the "lampcolor" program, providing it with your own lamp table with CIE (x,y) chromaticities. This table is explained a bit in the manual page for ies2rad, and I can explain it further if it doesn't make sense to you after reading that and looking at the default "lamp.tab" file included in the ray/src/cv directory. Failing this, you can use the .cal file ray/src/cal/cal/xyz_rgb.cal to convert between XYZ and RGB values. Again, let me know if you need help with this or the "calc" or "rcalc" programs. The resulting image will not look correct unless you adjust it for your particular monitor primaries, unless they just happen to match the canonical ones chosen for Radiance. (This was the idea.) To do this, you will have to either measure the monitor primaries or obtain this data from the manufacturers. Finally, I think you may be disappointed in your results for the other reason you mentioned, which is that the eye peforms an unconscious color (white) balancing operation when we view a scene. This is why most color video cameras have a white balance feature, also, and color films are selected based on the expected illumination source. Otherwise, all pictures taken under incandescent lighting would look orange, and/or outdoor images would look bluish. Again, we don't perceive them that way, because our eye/brain system compensates for the prevalent illumination over a wide range of dominant spectra. This is why most renderings are performed with equal-energy white illuminants (i.e., RGB all equal), because a final white-balancing operation would divide by the dominant illuminant color, yielding the exact same result. (See the -t option to pfilt.) The only case where it really makes sense to use different lamp colors is when you are mixing different types of illumination in the same environment, or letting in daylight with incandescents, etc. (I assume this is what you are doing, or you wouldn't be asking.) 
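For reference, the chromaticity-to-tristimulus step in all of this is plain algebra, so a brief sketch may help (the (x,y) pair is the 3500K example quoted above; the numbers are for illustration only). Pick a luminance, say Y = 1, then

    X = (x/y) * Y           = 0.4022/0.3639 = 1.105
    Z = ((1 - x - y)/y) * Y = 0.2339/0.3639 = 0.643

and push the resulting (X,Y,Z) through the XYZ-to-RGB matrix for the Radiance primaries -- that matrix step is exactly what xyz_rgb.cal (or lampcolor, starting from your own lamp table) takes care of, so there is no need to hard-code it yourself. None of this changes the white-balance issue above when several lamp types are mixed in one scene.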
Unfortunately, in this case, white balancing is problematic, since there is no one dominant spectrum to divide the result by. I hope this is helpful to you. By the way, I recently got some bounced mail from "radiance@arup.com" -- is this alias no longer in action? I removed it from our mailing list, but if you are still using Radiance there, then you probably want to be on the list. Shall I put your e-mail on the list, instead? -Greg RADIANCE DIGEST Volume 3 Number 1 The release of this digest corresponds with the release of the next version of Radiance--v. 3.1.1 to be announced in a subsequent e-mail. -Chas ------------------------------------------------------------- Index of Topics PDFBLUR PERSPECTIVE VIEW DISTORTION PHOTOMETRICS RADIANCE LIMITATIONS SIMULATING DYNAMIC RANGE CALCULATING ILLUMINANCE ARCHICAD TO RADIANCE INPUT LANGUAGE BASICS GLARE CALCS WITH ADELINE RADIANCE AND RED HAT LINUX INSTANCES AND OCONV STAR PATTERNS ------------------------------------------------------------- PDFBLUR QUESTION To: CKEhrlich@lbl.gov Subject: PDFBLUR QUESTION. Hi Mr. Ehrlich, I have a quick question regarding the "pdfblur" command in radiance. To try the command I typed : 1% rpict -vf myview -x 640 -y 480 -z orig.zbf scene.oct > orig.pic 2% pdfblur 0.5 60 8 orig.pic | pinterp -B -vf myview -x 640 -y 480 orig.pic orig.zbf > blurry.pic Of course, "scene.oct" and "myview" are files already available. The problem is that after three days of computation on a Indigo2 the program was still running! I had to kill the process thinking something was wrong. I am trying to model the effect of camera Focal Length and aperture for various ray traced images. Thank you for your precious help. Sincerely, Jean-Sebastien ---------------------- Date: Wed, 28 May 1997 17:50:13 -0700 From: radiance (Radiance Maintenance) To: valois@cim.mcgill.ca, chas@pink (Charles Ehrlich) Subject: Re: pdfblur Hi J-S, I just ran pdfblur on a similar problem, and it did take a few minutes, but it finished and the results were what I expected. I don't know what the problem is. Perhaps you could check to make sure that the output of pdfblur is a sequence of VIEW= lines. Which version of Radiance are you running? -Greg ---------------------- Date: Thu, 29 May 1997 13:21:00 -0400 (EDT) From: Valois Jean-Sebastien To: Radiance Maintenance Subject: Re: pdfblur Greg! I would first like to congratulate you on your new position at SGI. I am sure your experience in computer graphics will contribute toward better hardware and software products from SGI. In that sense it is good new for all of us ;) . pdfblur for a few minutes ?!? Mine was running for three days without any success! Ok, I guess I don't know how to use it. Actually, I am combining pinterp and pdfblur (from Radiance v3.0) to simulate ZOOM. I used the example given in the man pages under pdfblur where it says: rpict -vf myview -x 640 -y ... pdfblur 0.5 57 8 orig.pic | pinterp -B ... Instead of "pinterp -B -vf orig.pic ..." I used "pinterp -B -vf myview ..." That must be it! I just realized the mistake. Anyway, I am only using ONE view point (VIEW) to run the simulation, the same view point that I am using to generate a "pin-hole" type of image. ######## Thank you. 
Sincerely yours, Jean-Sebastien ____________________________________________ | Jean-Sebastien Valois B.Eng | |````````````````````````````````````````````| | * Email: valois@cim.mcgill.ca | | * Web page (Where and how to reach me): | | http://www.cim.mcgill.ca/~valois | |____________________________________________| ---------------------------------------------------------- PERSPECTIVE VIEW DISTORTION Date: Sun, 01 Jun 1997 17:13:46 -0400 From: Morgan Larch To: radiance-discuss Subject: Naive question regarding -vp and -vd Hello all, I have finally gotten a chance to spend some time with radiance and am stumped with the relationship between -vp and -vd and the scenes that I have been working on. I have used torad (which is great) to pull out several sections from autocad to get a better visual on light and space. What I need to be able to is as *simple* as move up and down, front and back, and so on without incurring fisheye perspective distortions. I'm sure this is not so simple once you get into the details, but never mind the the details for now, I can figure that out as I get going. So, how do I use -vp and -vd to move up, down, etc... Humbled, and hoping for you indulgence, mlarch@ix.netcom.com P.S. I think that I am on the list but have not seen any traffic in the last month or so. If there has been traffic, please let me know so that I can re-request. -------------------------- Morgan, The trick to prevent vertical perspective distortion is to maintain a view direction that is paralell to the X-Y plane (assuming Z is up), and changing ones vertical view orientation with the "-vl" view option. For example, the following two view definitions are *roughly* similar: rview -vtv -vp 0 0 0 -vd 1 1 0.000 -vu 0 0 1 -vs 0 -vs 0.2 rview -vtv -vp 0 0 0 -vd 1 1 0.127 -vu 0 0 1 -vs 0 -vs 0 This topic is discussed in more detail in the Radiance digests which can be found in the doc directory of the Radiance distribution. I double-checked the discussion list, and your e-mail is on it. -Chas Charles Ehrlich Principal Research Associate Lawrence Berkeley National Laboratory ------------------------- PHOTOMETRICS To: ckehrlich@lbl.gov Subject: RADIANCE 3.0 Date: Thu, 29 May 97 16:22:17 +0200 Hi Chas, I am Vincent from the FACULTE POLYTECHNIQUE DE MONS. For my thesis I had to measure the illuminance value for a road during the night. The caracteristics of the luminaire are given by a manufacturer (Photometric data for 25 horizontal and 27 vertical angles). I have to create the IES file. It is a symetric luminaire for roads lighting. Actually, the results are very different of results given by a firm specialised in lighting. To obatin the values I use : ximage my.pic | rlux my.oct or I render the image with the -i option. Could you tell me if the geometry of the luminous area given in the IES is very important to use in RADIANCE ? In my case, the luminaire is an ellipsoid. Is the description of the sky important ? And the important question : Is RADIANCE valid to render with accuracy exterior scene ? Thank you for your rapid reply ( I had to present my thesis the 15th of june). Vincent DUVIVIER FACULTE POLYTECHNIQUE DE MONS BELGIUM E-mail: duvivier@motelec.fpms.ac.be Date: Thu, 29 May 1997 14:26:01 -0700 To: duvivier@motelec.fpms.ac.be Subject: Re: RADIANCE 3.0 Hello Vincent, If you have the raw candlepower distribution data, it is more convenient to simply create a Radiance data file which describes the output of the luminaire, than to first create an IESNA file and then translated it into Radiance. 
Unfortunately, until the publication of the Radiance book, this is one of least well documented features of Radiance. I recommend that you use a sphere made of material illum rather than an ellipsoid. The invisible illum sphere will be just large enough to completely enclose the visible geometry of the luminaire. If glare is important, then this geometry should also contain a surface which describes the visible output aperature of the luminaire, which would be modeled with material glow. The intensities of the glow and illum would be inversely proportional to the projected area (in meters) of each of their respective surfaces. The projected area of the illum sphere is a circle (of course) and the projected area of a mesh of polygons would be the sum of the polygons' areas. When you have these data available, use the lampcolor program to compute the R G B intensities for each surface. There will be no double-counting of the illuminance even though there is both a glow and an illum because the glow will have a radius of effect of zero, and any ambient rays that happen to go reach the illum sphere will be stopped before reaching the glow. Assuming the data is bi-laterally symmetrical, and since 180 is not evenly divisible by 24 and 90 is not even divisibly by 26, the data file will look something like: --------- 2 0 0 27 v1 v2 v3 ... v27 0 0 25 h1 h2 h3 ... h25 v1h1 v1h2 v1h3 ... v1h25 v2h1 v2h2 v2h3 ... v2h25 ... v27h1 v27h2 v27h3 ... v27h25 --------- V1 through v27 and h1 through h25 are the specific vertical and horizontal angles respectively at which the data is provided, and the v1h1 through v27h25 are the actual values at each angluar location. If the data you have is provided in the opposite order, then simply swap the two lines after the "2" and use the data as you have it. What is different about this format than most data file formats is that the abcisas and ordinates are not specified on each and every line of data. Lets assume the above datafile is called lum1.dat. This data would then be included into the radiance scene with a brightdata pattern thusly: ---------- void brightdata illum_sphere_pat 5 corr lum1.dat source.cal src_theta src_phi2 0 1 .95 # if depreciation factors need to be accounted for # then change .95 to the corresponding total accumulated # depreciation allowance. illum_sphere_pat illum illum_sphere_mat 0 0 3 R G B # intensities as provided by lampcolor illum_sphere_mat sphere illum_sphere_surf 0 0 4 X Y Z R # location and radius of sphere ----------- There are many other examples of how to do this provided with the digests in ray/doc/digest. I recommend that you browse them thoroughly to satisfy yourself that you are doing this correctly. You should also make sure that your luminaire is oriented correctly with respect to the roadway. To calculate illuminance values on the roadway, create a "plan" view of the road with a view definition like: rview -vtl -vp 0 0 1 -vd 0 0 -1 -vu 0 1 0 -vh 100 -vv 10 And use the -I parameter of rpict (or rview, or rtrace). When the final image is then viewed with ximage and the value of a pixel is queried with "L", then the resulting value will be in lux. If this is a nighttime view, the sky is not important. Radiance is accurate for interior and exterior scene. The point of least accuracy in this sort of analysis would be the material description of the surface of the roadway. But, since you are not calculating luminances, this will not affect your results. 
The problem with roadway surfaces is that there is self-shadowing--something not well treated by any existing material model.

-Chas
-------------------
Subject: RADIANCE 3.0 / luminaire
Date: Mon, 09 Jun 97 17:11:47 +0200

Hi Chas, I am Vincent,

Thank you very much for your reply about modeling luminaires. I have other questions:

If the raw candlepower distribution data of the luminaire is in (cd/1000 lm) and the total output of the luminaire is 16500 lumens, is it correct to put 16.5 (or 16500) for 'total lamp lumens' when I use lampcolor and 1 (or 0.001) for the multiplier A1 of source.cal? To obtain correct values (compared with measures made by a specialised firm) for the illuminance, I had to use 3.5 for the multiplier. Is it normal?

Thank you very much for your reply.

Vincent DUVIVIER
FACULTE POLYTECHNIQUE de MONS, BELGIUM
email: duvivier@motelec.fpms.ac.be
-----------------------
Date: Sun, 1 Jun 1997 21:44:23 -0400 (EDT)
From: Navid Sadikali
To: "Gregory J. Ward"
Subject: Re: Radiance question: Tilting the filmplane? (fwd)

Hi again. Thanks for your response about tilting the filmplane, by leveling the view and then using view lift. Now I am trying to understand exactly how the number given to view lift corresponds to a movement in the 3D coordinates... For example, how much would I lift the view to exactly simulate the effect of tilting the film plane by a certain theta degrees? By choosing an arbitrary number such as 0.5 I can remove the convergences, but unless I fiddle I can't get the image centered in the same spot (i.e. centered in the same spot that corresponds to the center of the non-converged picture). Maybe I should illustrate:

                   | X
                   | building
                   |
                   |
    \-- distperp --|
 eye               |_____________

If the non-tilted picture is centered at X, how much do I lift the film plane to center the image on X in the tilted picture?

Note: If I lift the view by (X.vert - eye.vert) I don't get the correct image, even though I have set the view direction horizontal.

Thanks ever so much,
Navid
M.Math Student

From greg Wed Jun 4 09:47:36 1997
Date: Wed, 4 Jun 1997 09:47:35 -0700
From: greg (Gregory J. Ward)
To: Navid Sadikali
Subject: Re: Radiance question: Tilting the filmplane? (fwd)

Hi Navid,

Using your diagram:

                   | X
                   | building
                   |
                   |
    \-- distperp --|
 eye               |_____________

You need to specify a lift value that is the fractional image height you want to shift your view. The image height is in turn determined by your vertical view angle (-vv parameter). For example, if the top of your "unlifted" view just reached your desired view center (marked "X" in the above illustration), then you would specify a view lift of 0.5, since you want to move your image center half the frame height.

To put it mathematically, your frame height is twice the tangent of half your vertical view angle in a relative coordinate system where your view vector defines the base of a triangle with unit length. All you have left to do is figure out what the lift should be using similar triangles:

           (X.vert - eye.vert) / distperp
    lift = ------------------------------
                 2 * tan(vv_ang/2)

If you ever want to use the angle instead of distances, just replace the numerator with the tangent of your vertical lift angle. A similar sort of formula applies for horizontal view shift, if you ever need that.

Hope this helps.
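As a quick numeric check of the formula above (all values made up for illustration): if X sits 3 units above the eye, distperp is 10 units, and the picture is rendered with -vv 45, then

    lift = (3/10) / (2 * tan(22.5 deg)) = 0.3 / 0.828 = 0.36

so the view would keep a level direction (e.g. -vu 0 0 1 with a horizontal -vd, assuming Z is up) and add -vl 0.36 to re-center the picture on X.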
-Greg From chas@pink Fri Jun 13 13:21:36 1997 Date: Fri, 13 Jun 1997 13:18:37 -0700 From: chas@pink (Charles Ehrlich) To: MRINALINI D SHARMA Subject: Re: Adeline Cc: radiance@pink --------------------------------------------------- SHADOW QUANTIFICATION STUDY Date: Fri, 13 Jun 1997 12:00:40 -0700 (PDT) From: MRINALINI D SHARMA To: ckehrlich@lbl.gov Subject: Adeline June 13, 1997 Dear Mr. Charles Ehrlich, I am an assistant a the Environmental Simulation Laboratory, College of Environmental Design, Berkeley. I was wondering if ADELINE is capable of coducting a shadow quatification study. In essence is it able to calculate shadow areas given a lighting situation. Your prompt reply would be appreciated. Thankyou for your time. sincereley Mrinalini Devi Sharma ---------------------------------------- Dear Mr. Sharma, Yes. ADELINE/Radiance can perform a shadow quantification of an arbitrary lighting and geometry situation. The way to perform this is not obvious, however. First problem: getting the geometry together. I recommend using AutoCAD with the torad or radout export utilities. Second: geting the lighting distribution data from mfgr. Third: assembling the materials and lighting in the Radiance input text files (no graphical user interface for this task). Fourth: Define a "plan" view of the area to be quantified. The view type for the rendering program is "-vtl". See the manual pages for rpict for more information. Fifth: Calculate the image with sufficient resolution and analyze the results with pvalue. You would then have to write a simple script that would count the number of pixels in the image that have a value less than the ambient value of your scene (this can be zero with -av 0 0 0). Or, the image file could be converted to some other format (like tiff) and analyzed with any other convenient software which can query and count pixels. So, ADELINE/Radiance can do it, but not exactly with ease. -Chas Charles Ehrlich Principal Research Associate Lawrence Berkeley National Laboratory ------------------------------------------------ RADIANCE LIMITATIONS From urge@shore.net Tue Jul 1 11:48:54 1997 Date: Tue, 01 Jul 1997 14:46:30 -0700 From: "R.J. Russell" Organization: Continental Design To: gregl@asd.sgi.com Subject: Radiance restriction general question First, if this not the appropriate place to ask tech questions about Radiance, please direct my there. If you don't mind answering a few general questions I have about Radiance, I'd appreciate it. On the web page, under "Restrictions of the Complexity of the Problem" it says that: Although Radiance attempts to account for all significant sources of illumination, there are a few cases that are not adequately modeled in the present calculation. The most important of these is the reflection of intense light from curved specular surfaces, such as might be found within a heliostat or parabolic light fixture. Computing light from such a system requires a ray tracing method that follows light in the forward direction, starting at the emitters and working outward. Since Radiance works strictly in the reverse direction, a separate preprocess is necessary to compute these output distributions. This hypothetical preprocessor is not included in the present package. I have a client that designs automotive taillamps, and they want to be able to see the lit appearance of their lamps in a rendered image. 
The majority of the light transmitting through the lens surface is reflected off the curved, basically parabolic reflector surface behind the bulb (essentially the same as a flashlight). Does the quoted restriction refer to a problem like this? If so, can you point me to any resources on the web or elsewhere that may have addressed this? My client is VERY interested in doing this, so if it has not been done already they may want me to develop the mentioned "seperate preprocess". Could you briefly explain the major issues involved with that, or outline the basic steps that such a program would have to tackle? Any info is very appreciated. R.J. Russell urge@shore.net Continental Design -------- >From gregl@radiate.asd.sgi.com Tue Jul 1 12:49:39 1997 From: gregl@radiate.asd.sgi.com (Greg Larson) To: "R.J. Russell" Subject: Re: Radiance restriction general question Hi R.J., Future queries about Radiance should be addressed to "radiance@floyd.lbl.gov". Indeed, the restriction mentioned does apply to your case. I cannot offer much advice on writing a forward ray-tracer, except to say that it is about a 6 man-month effort for someone who is already an expert in the field. It is not a task to be tackled lightly. I can recommend a program that already performs such a function, called ASAP by Breault Research Organization (http://www.breault.com/). I suggest that you check this out. Radiance assumes that you have some way of measuring or otherwise characterizing the output of all light sources in a scene. If you can measure the output distribution of your tail-lights, then you can certainly simulate scenes containing them. If your goal is to predict the tail-light output, however, you need a program like ASAP. Best of luck. -Greg _____________________________________________________________________ Gregory Ward Larson (the computer artist formerly known as Greg Ward) Silicon Graphics, Inc. Computer Science Department 2011 N. Shoreline Blvd., M/S 07U-553 537 Soda Hall, UC Berkeley Mountain View, CA 94043-1389 Berkeley, CA 94720-1776 (415) 933-4878, -2663 fax (510) 642-3631, -5775 fax gregl@sgi.com on Tues., Thurs. and Fri. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ---------------------------------------------- SIMULATING DYNAMIC RANGE From valois@cim.mcgill.ca Mon Jun 23 08:29:58 1997 Date: Mon, 23 Jun 1997 11:25:43 -0400 (EDT) From: Valois Jean-Sebastien To: Radiance Account Subject: Gray Scale Response Hi Greg, How are you? Thanks for your last email. Here is another (quick) question for you. Is there a command/function that allows to stretch or crop the intensity ranges for an image ? For example, I would like to model the response of a camera that can resolve a 10-step (32:1 contrast ratio) under a minimum scene illumination of 10 footcandles. Also, I know that the gray scale capability can extend to a 12,000 footcandle scene illumination. Sincerely yours, Jean-Sebastien ____________________________________________ | Jean-Sebastien Valois B.Eng | |````````````````````````````````````````````| | * Email: valois@cim.mcgill.ca | | * WWW : http://www.cim.mcgill.ca/~valois | | * Tel. : (281) 244-7271 | |____________________________________________| < or stated another way > From valois@cim.mcgill.ca Tue Jun 24 09:14:53 1997 Date: Tue, 24 Jun 1997 12:10:38 -0400 (EDT) From: Valois Jean-Sebastien To: Radiance Account Subject: Dynamic Range Hi Greg! Me again. 1. I guess what I was trying to ask yesterday was: "How do I change (or simulate) the dynamic range in Radiance ?" 
Instead of a 10^30:1 ratio, I would like to see the effect of a 100:1 ratio for a given Exposure. I guess a special filter would do the job, but which one ? The way I see it, the dynamic range is related to the resolving power that is a function of the Exposure. Is that right ? 2. Would it be possible to use the function: Radiance EXPOSURE = K*T*S/f^2. for a CCD camera when, instead of taking S as the film speed (ISO), I take the CCD efficiency ? Thanks for your time. Sincerely, Jean-Sebastien ____________________________________________ | Jean-Sebastien Valois B.Eng | |````````````````````````````````````````````| | * Email: valois@cim.mcgill.ca | | * WWW : http://www.cim.mcgill.ca/~valois | | * Tel. : (281) 244-7271 | |____________________________________________| ------------- From radiance Wed Jun 25 12:34:29 1997 Date: Wed, 25 Jun 1997 12:34:28 -0700 From: radiance (Radiance Account) To: Valois Jean-Sebastien Subject: Re: Gray Scale Response Cc: radiance Jean-Sebastien, The best (quick) answer I can give you is to explore the use of the "pcomb" program. It allows you to specify an arbitrary function on the command line which provides the functionality to perform pixel-by-pixel operations. Be sure to browse the rayinit.cal file for useful functions to use. There have also been examples of the use of the pcomb program in the digests. -Chas Charles Ehrlich Principal Research Associate Lawrence Berkeley National Laboratory ------------------------------------------------ CALCULATING ILLUMINANCE From duvivier@motelec.fpms.ac.be Thu Jun 12 03:53:14 1997 To: chas (Charles Ehrlich) Cc: duvivier@motelec.fpms.ac.be Subject: Re: RADIANCE 3.0 / luminaire Date: Thu, 12 Jun 97 12:51:41 +0200 Hi Chas, I am Vincent Thank you very much for your rapid reply. Your are great. I don't know what is my problem with the luminaire. The scene is an exterior nighttime view. The portion of road is 15m X 4m (origin at 0,0). The 5 luminaires (GEMA from COMATELEC, ellipsoid, high-pressure sodium Philips SON-T 150W, 16500 lumens) are placed every 15m (starting at x=-15 m to x=45m) at y=5.5m and z=5m. 
1) Simple description of the luminaire xxxxxxx * xxxxxxxxxxx .523m x = not emitting area lllllllllll l = emitting area lllllll * < .7m > 2) PROGRAM lampcolor : lamp type : HID length unit : meter lamp geometry : ring disk radius : .35 total lamp lumens : 16500 lamp color (RGB) = 134.387528 49.117057 1.253024 3) Assuming that the candlepower distribution file (cd/1000 lm) is correct (vertical angle 0-90 degrees and horizontal angle 0-355 degrees,), the description of the scene is : ### Description of the road ### Material : Beton (not important for the illuminance) void plastic beton 0 0 5 0.216 0.145 0.106 0.192 0 ### Geometry beton polygon road 0 0 12 0 0 0 0 4 0 15 4 0 15 0 0 ### Description of the luminaire void brightdata illum_sphere_pat 5 corr gema.dat source.cal src_theta src_phi2 0 1 0.0035 (not 0.001 if I want the correct values) illum_sphere_pat illum illum_sphere_mat 0 0 3 134.387528 49.117057 1.253024 illum_sphere_mat sphere illum_sphere_surf1 0 0 4 -15 5.5 5 .35 illum_sphere_mat sphere illum_sphere_surf2 0 0 4 0 5.5 5 .35 illum_sphere_mat sphere illum_sphere_surf3 0 0 4 15 5.5 5 .35 illum_sphere_mat sphere illum_sphere_surf4 0 0 4 30 5.5 5 .35 illum_sphere_mat sphere illum_sphere_surf5 0 0 4 45 5.5 5 .35 4) gema150.rif file : ZONE= I -16 46 0 6 0 6 scene= scene.rad view= v1 -vtl -vp 7.52404 2.41832 1.48918 -vd 0 0.0099995 -0.99995 \ -vu 0 0 1 -vh 22.5 -vv 22.5 -vo 0 -va 0 -vs 0 -vl 0 render= -i -dt 0 -lw 0 -lr 12 -ad 1024 -ab 2 -x 1024 -y 1024 5) rad gema150.rif ximage gema150_V1.pic The values are obtained from interactive clicks (L) on the picture. This was the process to evaluate the illuminance. Could you tell me if it seems correct ? Thank you very much. Vincent DUVIVIER Rue Trieu des Dames, 24 5190 Jemeppe-sur-Sambre Belgium email: duvivier@motelec.fpms.ac.be (until july of 1997) ---------------------------------------------------------- Vincent, I appologize for my late reply. I've been way over-worked lately. I started to look at your scene and realized that I don't have your data file. If you e-mail it to me, I'll be able to reproduce your results. Meanwhile, you can try two changes to your scene while you wait for my reply. 1. In scene.rad change: void brightdata illum_sphere_pat 5 corr gema.dat source.cal src_theta src_phi2 0 1 0.0035 to: void brightdata illum_sphere_pat 5 flatcorr gema.dat source.cal src_theta src_phi2 0 1 0.001 Flatcorr is the correct correction function for planar surfaces such as rings. Corr is used for spheres and surfaces which can be simplified as infinitely distance like the "source" surface. 2. In the rif file change: render= -i -dt 0 -lw 0 -lr 12 -ad 1024 -ab 2 -x 1024 -y 1024 to: render= -i RESOLUTION= 1024 1024 QUALITY= MEDIUM OPTFILE= scene.opt OCTREE= scene.oct The -x and -y params were taken out because this causes problems when you try an interactive viewing with rview. RESOLUTION takes care of that. QUALITY=MEDIUM makes the resolution render at twice the specified RESOLUTION value to reduce the jaggies, and takes care of the other necessary options. To do an interactive view: rad -o x11 file.rif Use the "t" command to trace a ray into the scene. Press return a few times and it will show the average value, which will be illuminance because of the "-i" in the render= line. Read the rview manual for more commands. If you have no other surfaces in your scene except the ground and the light, you don't need any ambient bounces, so you could make -ab = 0. 
I added the -I option so that you could use the scene.opt options file with rtrace for a more rigorous analysis with rtrace. Same goes for the OCTREE= definition. rtrace @scene.opt -dv- -h- -x 1 scene.oct < scene.pts \ | rcalc -e '$1=47.4*$1+120*$2+11.6*$3' \ > illum.val scene.pts contains: 0 0 1 0 0 1 1 0 1 0 0 1 1 1 1 0 0 1 0 1 1 0 0 1 for a simple four-point analysis. First three digits are the "sensor" location x y z, and the next three are the "sensor" orientation x y z. Scene.val will contain four values corresponding in sequence to the four locations above. Rcalc with its options takes the average of the R G B calculated values. -Chas ----------------------------------------------------------------------- ARCHICAD TO RADIANCE From paul@bourke.gen.nz Wed Jul 2 16:02:15 1997 Date: Thu, 3 Jul 1997 09:00:52 +1000 To: greg (Gregory W. Larson) From: Paul Bourke Subject: Re: ArchiCAD2rad >We looked and looked, and could not find the archicad to Radiance translator. >I guess it's time to ask Paul Bourke if he still has it, or knows where it is. >His e-mail address is "pdb@mhri.edu.au" -- and please include me in your >discussion. It can be found at http://www.mhri.edu.au/~pdb/software/ I've sent the URL off to Henry. How are things anyway? I never actually heard what you were doing at SGI? As it turns out I'm using SGI gear all the time now, we have Indigo-2 Max Impacts for workstations and a lovely 12 processor power challenge as our CPU server. Even better when you consider there are only 3 of us using it! I'm involved almost exclusively in scientific visualisation and still find lots of uses for Radiance. I think I asked earlier whether you'll be at Siggraph, if so, I might see you there. --------------------------------------------------------------------- Paul Bourke pdb@mhri.edu.au Brain Dynamics Research Unit http://www.mhri.edu.au/ Mental Health Research Institute Ph: 61 3 9389 2602 Locked Bag 11, Parkville Fax: 61 3 9387 5061 Victoria 3052, Australia --------------------------------------------------------------------- INPUT LANGUAGE BASICS From mlarch@ix.netcom.com Wed Jul 2 06:59:16 1997 Date: Wed, 02 Jul 1997 09:53:09 -0400 From: Morgan Larch To: radiance-discuss Subject: first colorfunc's and the disappear'd prmitives This may well be yet another silly question, but ... My first try at colorfunc's has resluted in the disappearance of my primitives. I built a .cal file, s.cal, with the fallowing: red = Px*A1; green = Py*A2; blue = Pz*A3; and a rad file, c.rad, with the fallowing: void plastic white 0 0 5 1 1 1 0.01 0.01 void colorfunc hat 4 red green blue s.cal 0 3 1 1 1 void light bright_white 0 0 3 20 20 20 bright_white sphere l1 0 0 4 0 0 20 4 hat sphere s1 0 0 4 0 0 0 2 hat polygon floor 0 0 12 -10 -10 0 -10 10 0 10 10 0 10 -10 0 After oconv c.rad > c.oct, rivew ( with -vp 0 -20 10 -vd .30 .95 -.34 -av .2 .2 .2) shows me nothing. If I replace hat with the plastic material white, I can see what I'd expect, a polygon intersecting a sphere. I read through the usman1.rtf and usman2.doc and looked over several example files and thought I had connected all the dots. But I guess not. What am I not getting ? Thanks, mlarch@ix.netcom.com ---------------------- From radiance Sun Jul 6 23:48:40 1997 Date: Sun, 6 Jul 1997 23:48:39 -0700 From: radiance (Radiance Account) To: Morgan Larch Subject: Re: first colorfunc's and the disappear'd prmitives Morgan, You're almost there! 
The only concept of the Radiance input language that you overlooked was that a pattern by itself is not sufficient to describe the surface material properties of an object. The pattern must "modify" a material like plastic. So the colorfunc pattern must appear before the white plastic, and the white plastic must be modified by the "identifyier" of the colorfunc (hat). In this way, the white plastic "inherits" the properties of the previous modifier(s). void colorfunc hat 4 red green blue s.cal 0 3 1 1 1 hat plastic white ^^^ 0 0 5 1 1 1 0.01 0.01 void light bright_white 0 0 3 20 20 20 bright_white sphere l1 0 0 4 0 0 20 4 hat sphere s1 0 0 4 0 0 0 2 hat polygon floor 0 0 12 -10 -10 0 -10 10 0 10 10 0 10 -10 0 That should do it. -Chas ------------ From mlarch@ix.netcom.com Mon Jul 7 10:45:30 1997 Date: Mon, 07 Jul 1997 13:40:29 -0400 From: Morgan Larch To: Radiance Account Subject: Eureka ! Re: first colorfunc's and the disappear'd prmitives, thanks Thanks Chas, now that I stop and look at it, it makes quite a lot of sense and a whole lot of difference! mlarch@ix.netcom.com ------------------------------------------------------------- GLARE CALCS WITH ADELINE From haico.schepers@arup.com Tue Jul 8 16:17:53 1997 Date: Tue, 08 Jul 1997 22:53:08 +0000 From: Haico Schepers To: chas Subject: Glare Uuugh Chas I seem to be able to get some numeretical results now that rayinit is located in the same directory as Glare.exe. Just some additional questions to those in the last email; 1/ I can't find xglaresrc to run the visualization program for Glare 2/ How important is the warning that I am missing 80% of samples to the reliablity of the results 3/ I am working on Daylighting glare with large sources, has the program been evaluated for this? If there is anyone else that can help me on these questions please forward their email accounts. Thanks for your help haico -------- From greg Wed Jul 9 10:23:43 1997 Date: Wed, 9 Jul 1997 10:23:42 -0700 From: greg (Gregory W. Larson) To: chas Subject: Re: Can you help with this one? Cc: haico.schepers@arup.com Chas/Haico, > 1/ I can't find xglaresrc to run the visualization program for Glare Doesn't exist in ADELINE, since it's tied to X11. > 2/ How important is the warning that I am missing 80% of samples to the > reliablity of the results Shouldn't be a big deal. What it means is that there is information on the border of your sample space that is unaccounted for. If you're concerned that the borders contain important glare information, then you should run findglare with the rtrace option, though I'm not sure if this is supported in ADELINE. The other thing you could do is render a wider angle view to give to findglare. > 3/ I am working on Daylighting glare with large sources, has the program > been evaluated for this? There has been some work on this done by the folks at the LESO in Switzerland, but I don't think it's been integrated back into ADELINE. In general, the glare indices provided are appropriate for small sources, though they have been applied by people to other cases. I'm just not sure how valid they are. -Greg _____________________________________________________________________ Gregory Ward Larson (the computer artist formerly known as Greg Ward) Silicon Graphics, Inc. Computer Science Department 2011 N. Shoreline Blvd., M/S 07U-553 537 Soda Hall, UC Berkeley Mountain View, CA 94043-1389 Berkeley, CA 94720-1776 (415) 933-4878, -2663 fax (510) 642-3631, -5775 fax gregl@sgi.com on Tues., Thurs. and Fri. 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ----------------------------------------------------------------------- RADIANCE AND RED HAT LINUX From jbi@saturn.dmu.ac.uk Thu Jul 10 01:35:33 1997 Date: Thu, 10 Jul 1997 09:35:37 +0100 From: jbi@dmu.ac.uk (Joven Ignacio) To: rogerc@indigo.ie Subject: Re: Radiance/Linux (Red Hat) Dear Roger If you still haven't sorted out the problem with Radiance installation on the Linux (RedHat) version, the solution is as follows. You have to specify where your include and library files are located under the X11R6 directory (RedHat's latest version for X11). When you install Radiance, you will have to change the settings on the makeall script, see below: <... cut ...> Current rmake command is: #!/bin/sh exec make "SPECIAL=" \ "OPT=-O2" \ "MACH=-DBSD -DSPEED=40 -DDCL_ATOF -Dqsort=_quicksort -DBIGMEM" \ ARCH=IBMPC "COMPAT=malloc.o erf.o getpagesize.o" \ INSTDIR=/usr/home/bin \ LIBDIR=/usr/home/lib \ CC=gcc "$@" -f Rmakefile ... cut ... >>>>>>>>>>>>>>>>>>>. To correct the error, use the following. #!/bin/sh exec make "SPECIAL=" \ "OPT=-O2" \ "MACH=-DBSD I-DSPEED=40 -DDCL_ATOF -Dqsort=_quicksort -DBIGMEM -L/usr/X11R6/lib -I/usr/X11R6/include" \ ARCH=IBMPC "COMPAT=malloc.o erf.o getpagesize.o" \ INSTDIR=/usr/home/bin \ LIBDIR=/usr/home/lib \ CC=gcc "$@" -f Rmakefile >>>>>>>>>>>>>>>>>> It's because RedHat has both an X11 and an X11R6 directory and radiance just gets confused with the two directories. problem solved on my end. regards Joven ______________________________________________________________________ Jose (Joven) Ignacio The Institute of Energy & Sustainable Dev't. The Gateway - DeMontfort University, Leicester - LE1 9BH UK Email: jbi@dmu.ac.uk Phone: +44 116 250-6124 FAX: +44 116 257-7449 URL: http://www.iesd.dmu.ac.uk/~jbi/ ______________________________________________________________________ INSTANCES AND OCONV From haldane@MIT.EDU Thu Jul 17 12:29:59 1997 To: greg@floyd Subject: some questions Date: Thu, 17 Jul 1997 15:28:29 EDT hi greg, i had some questions concerning radiance. the first question is concerning smoothing in radiance. i have a model of a person that i've created in "poser" on the macintosh and i'm trying to render it in radiance. the program exports dxf and i translate that into a rad file using radout. is there anyway to smooth the surfaces of the model? the second question is concerning the use of instances. i had a file that had 100 instances and it created an oct tree that was about 57mb. does that sound right? the oct file was simply a column with some cylindrical mapping. here's a sample of the instances.... 
## row 1 ## -------------------------------------------------------------------- void instance bcolumns 5 Oct/bcolumns.oct -t 10.9530 25.0780 0.0000 0 0 void instance bcolumns 5 Oct/bcolumns.oct -t 13.1249 25.0780 0.0000 0 0 void instance bcolumns 5 Oct/bcolumns.oct -t 15.2968 25.0780 0.0000 0 0 void instance bcolumns 5 Oct/bcolumns.oct -t 17.4687 25.0780 0.000 0 0 void instance bcolumns 5 Oct/bcolumns.oct -t 19.6406 25.0780 0.0000 0 0 void instance bcolumns 5 Oct/bcolumns.oct -t 21.8125 25.0780 0.0000 0 0 void instance bcolumns 5 Oct/bcolumns.oct -t 23.9844 25.0780 0.0000 0 0 void instance bcolumns 5 Oct/bcolumns.oct -t 26.1563 25.0780 0.0000 0 0 void instance bcolumns 5 Oct/bcolumns.oct -t 28.3282 25.0780 0.0000 0 0 void instance bcolumns 5 Oct/bcolumns.oct -t 30.5001 25.0780 0.0000 0 0 ## row 2 ## -------------------------------------------------------------------- void instance bcolumns 5 Oct/bcolumns.oct -t 10.9530 27.2499 0.0000 0 0 etc... am i doing it correctly? there's also one other very peculiar aspect to this whole thing. if i use 11 instances, the final octree gets generated almost instantly. if use 12 instances, the octree takes up to 5 minutes to generate. do you know what might be the cause for this? and that's it. thanx for all your time and assistance. haldane ------- From greg Thu Jul 17 13:23:37 1997 Date: Thu, 17 Jul 1997 13:23:37 -0700 To: haldane@MIT.EDU Subject: Re: some questions Hi Haldane, Unfortunately, there's no automatic smoothing method in Radiance. If you have surface points on a rectangular grid, you can give them to the gensurf program and have it smooth them using the -s option, or if you have the surface normals, you can pass them through tmesh2rad or mgf2rad to get smoothed triangles out, but those are the only options. Instances are a bit strange the way they work in oconv. Since they are volumes as opposed to surfaces, oconv cannot separate them very well, and you can get "clumpings" that take it a long time to sort out. Keep in mind that the extent of an octree is actually a cube, so your columns do not fit well into an instance, as all their dimensions will equal their longest one. If you can find a way to group roughly cubical geometry together for your instances, you combined octree will be much smaller and will generate much more quickly. -Greg --------- From radiance Fri Jul 18 10:04:51 1997 Date: Fri, 18 Jul 1997 10:04:50 -0700 From: radiance (Radiance Account) To: haldane@MIT.EDU Subject: Re: some questions Cc: radiance, greg Hi Haldane, I would like to add a few words regarding the use of instances and octrees. The "clumping" Greg mentions is the cause of your problems with oconv...why it is taking so long, and also possibly the reason why the octree is so large. The reason lies with the cubical form of the bounding boxes, as Greg mentions. Another way to get around this is to tell oconv to not bother with trying to sort out the instances' bounding boxes from each other. This is done by specifying a larger set size on the command line. Since you have twelve "surfaces", i.e., instances in your scene, you can use: oconv -n 12 columns.rad > columns.oct I would recommend that you then create another instance out of this gathering of columns before you include it into the main scene octree. If done in this way, there will only be a very small impact on rendering speed because only one voxel of the entire scene has more than 5 (the optimum) sub-voxels. 
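To make the two-step recipe above concrete, here is a small sketch (file and identifier names are placeholders): after building columns.oct with oconv -n 12, wrap it in a one-line scene file, say allcolumns.rad, containing

    void instance all_columns
    1 columns.oct
    0
    0

and then compile the main octree with

    oconv scene.rad allcolumns.rad > scene.oct

so that the main scene sees the whole group of columns as a single instance, which is the arrangement described above.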
Unfortunately, I have no suggestions for how to smooth the surfaces of your columns in the DXF file, unless you have a means of creating instead a Wavefront ".obj" file with vertex normals included for smoothing. Using obj2rad -s will cause the resulting radiance file to have surfaces smoothed. There's no way you're going get smoothing from DXF. -Chas ----------------------------------------------- STAR PATTERNS Subject: Star patterns in pfilt Date: Wed, 16 Jul 97 10:51:17 -0500 From: To: Does anyone have any advice in making the star pattern options (-n, -s, -h) work noticeably with pfilt in Radiance? I have not been able to hit on any combinations that show visible effects. Thanks for any help, robert@audile.com -------------- Date: Fri, 18 Jul 1997 10:54:37 -0500 (EST) From: "robert a. shakespeare" To: robert@audile.com cc: raydisc Subject: Re: Star patterns in pfilt Hi Robert, Try something like the following to achieve subtle star effects. Of course, it is dependent on luminance of objects within the scene. %pfilt -2 -h 5 -n 6 -s .00001 input.pic > output.pic You must use the -2 option instructing pfilt to make two passes over the picture data. Change the -h value to control the luminance level at which the star pattern will begin to be exhibited. Set the number of points with the -n value. The -s value sets the "spread" of the effect, ie, the value it has at the edges of the picture. The default is .0001 Hope this helps! -Rob Shakespeare Theatre Computer Visualization Center Indiana University +---------------------------------------------------------------------------+ | PLEASE AVOID THE EVIL REPLY COMMAND, AND ADDRESS YOUR E-MAIL EXPLICITYLY! | +---------------------------------------------------------------------------+ | Radiance Discussion list (unmoderated): radiance-discuss@radsite.lbl.gov | |*Radiance Digest mailing list (moderated): radiance@radsite.lbl.gov | | Radiance Modeling list (unmoderated): radiance-model@radsite.lbl.gov | | Requests to subscribe/cancel: radiance-request@radsite.lbl.gov | | Archives available from: ftp://radsite.lbl.gov/rad/pub/digest | | Radiance world-wide web site: http://radsite.lbl.gov/radiance/ | +---------------------------------------------------------------------------+ Radiance Digest Volume 3 Number 2 August 25, 1997 The latest version of the Radiance software is now: Radiance 3.1.3 A patch is available for v3.1 at: ftp://radsite.lbl.gov/pub/patch To streamline operations, a separate digest of the "discussion" list will no longer be maintained. As of this digest, content will be gleaned from both direct contact e-mail and discussion list postings. This may result in a slight increase in the frequency of postings to the digest list. Your comments and suggestions are (always) welcome. 
-----------------------------------------------------
Topics:
CONVERTING TO 8-BIT COLOR DEPTH
RUNNING OUT OF RAM WITH RVIEW
PCOND -H VERSUS XIMAGE -E HUMAN
RADIANCE ON MACHTEN (POWERMAC UNIX)
OPTIMIZING OCONV (DENSE OCTREES)
HINTS FOR BEGINNERS
PERSPECTIVE AND PARALLEL VIEW CORRESPONDENCE
PROBLEM WITH PENUMBRAS
RPIECE AND PARALLEL PROCESSING
GROUND PLANE ILLUMINANCE (AND AN ERRATA)
PCOND PROBLEMS (ERRATA AND PATCH AVAILABLE)
-----------------------------------------------------
CONVERTING TO 8-BIT COLOR DEPTH
From joongnam@psych.nyu.edu Thu Jul 31 07:01:08 1997 Date: Thu, 31 Jul 1997 10:03:59 -0400 (EDT) From: Joongnam Yang Subject: Question on I/O on RADIANCE
Hi, I've tried several times to ask questions on RADIANCE using the e-mail addresses listed in the package, to no avail. I've got this email address somewhere on the NET. If I may, I'd like to ask the following question. I've created an image using rpict and I want to import it to an existing experimental program by reading the binary bytes and reducing the number of colors in the image to only 256. The image that I created right now contains 311 colors. I've tried to use existing UNIX graphics converters (*ppm*), but I do not want to import the picture itself. I read the image as bytes, so that I can play around with it to change the number of colors. Do you know if there is a certain algorithm or rule to reduce the number of colors? The Unix converters produced 81 colors from 311 colors, so I do not want to use such a crude algorithm. Thanks in advance. Joongnam Yang NYU Vision
------
From gwlarson@radiate.engr.sgi.com Thu Jul 31 08:55:27 1997 From: gwlarson@radiate.engr.sgi.com (Greg Larson) To: Joongnam Yang Subject: Re: Question on I/O on RADIANCE
Hi Joongnam, To answer your question, you may use either of the converters supplied with Radiance, ra_t8 or ra_pr, to convert to 256 (or fewer) color images using the -c option. If you want to write your own color reduction (quantization) program, you can look at the code in ra_t8.c, clrtab.c and neuclrtab.c in the ray/src/px directory. The neural network color quantization scheme is particularly good at determining optimal colors, and you can get to it in ra_t8 with the -n option (use also -d to turn off dithering for best results). I hope this helps. In the future, you may send your questions to "radiance@radsite.lbl.gov". -Greg
_____________________________________________________________________ Gregory Ward Larson (the computer artist formerly known as Greg Ward) Silicon Graphics, Inc. Computer Science Department 2011 N. Shoreline Blvd., M/S 07U-553 537 Soda Hall, UC Berkeley Mountain View, CA 94043-1389 Berkeley, CA 94720-1776 (415) 933-4878, -2663 fax (510) 642-3631, -5775 fax gregl@sgi.com on Tues., Thurs. and Fri. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
RUNNING OUT OF RAM WITH RVIEW
From mkli@netway.at Tue Aug 5 12:33:39 1997 From: Martin Klingler To: "'radiance@radsite.lbl.gov'" Subject: Problems with radiance and LINUX Date: Tue, 5 Aug 1997 21:29:23 +0200
Hi, I am Martin. I have just set up my Linux system to run RADIANCE. Linux is completely new to me but it looked fine. But I have the following problem. The daffodil example runs well, but if I try the office example, rad quits with an error. I have tried my own examples and found out that I always get an error with higher resolutions or higher quality. If you have any idea what this could be, I would be very thankful if you could give me a hint.
Many thanks and best regards Martin
-----
Martin, I need much more specific information in order to diagnose what might be going on with your Linux box and Radiance. Please write down each of your commands and the complete error message. Even still, I can't guarantee that I'll answer. But if it is OK with you, I'll forward it to the Radiance discussion group for some help from the masses. -Chas
-------
From mkli@netway.at Thu Aug 7 13:32:58 1997 From: Martin Klingler To: "'Radiance Account'" Subject: AW: Problems with radiance and LINUX Date: Thu, 7 Aug 1997 22:28:47 +0200
Chas, thank you for your answer. Here are the details. I am running Linux from the newest S.u.S.E. distribution on a Cyrix 6x86 P166 with 32 MB. After starting the office example within X11 (rad -o x11 office.rif) the picture is produced, but then, usually in the 512 refining pass, the process is cancelled. The office.err looks like:
Process 468 on Martin started Wed Aug 6 20:29:59 CEST 1997
rad -v norm office.rif
rpict -vu 0 1 0 -vp 8 36 -27 -vd -0.560723 -0.233635 0.794358 -vh 39.9 -vv 27.5 -x 1024 -y 1024 -ps 6 -pt .08 -dp 512 -ar 18 -ms 1.3 -ds .3 -dt .1 -dc .5 -dr 1 -sj .7 -st .1 -aw 1024 -aa .2 -ad 400 -as 64 -av 0.5 0.5 0.5 -lr 6 -lw .002 model.oct > office_norm.unf
rad: error rendering view norm
I have the feeling that the error does not always occur at the same position if I try the same thing a few times. Maybe you can help me with that. It is OK for me if you pass the problem to the discussion group. Thanks a lot Martin
------
From radiance Thu Aug 7 15:32:38 1997 Date: Thu, 7 Aug 1997 15:32:38 -0700 From: radiance (Radiance Account) To: Martin Klingler Subject: Re: AW: Problems with radiance and LINUX
Martin, If this is a very large image (1000 pixels or greater), then the problem is that you are running out of RAM. Rview is not intended for final renderings because it keeps a copy of the pixel values in a native 32-bit Radiance format in RAM. If you make the rview window smaller and rview goes to completion without an error, then this is definitely the problem. For final renderings, edit the rif file to reflect the desired quality and resolution, and execute:
rad office.rif
Look at the man pages for rad to learn about the other options for controlling accuracy, quality, etc. -Chas
------
From mkli@netway.at Sun Aug 10 12:27:56 1997 From: Martin Klingler To: "'Radiance Account'" Subject: AW: AW: Problems with radiance and LINUX Date: Sun, 10 Aug 1997 21:23:52 +0200
Hi Chas, thanks a lot for your answer. You are right, RAM seems to be the problem. I have played around a little bit. I am not very clear about what happens with the RAM. Is there information somewhere about how much RAM is needed for what kind of models? Is there also a restriction in resolution, quality ... if I use the recommended rpict? Thanks Martin
-------
From radiance Sun Aug 10 22:53:46 1997 Date: Sun, 10 Aug 1997 22:53:46 -0700 From: radiance (Radiance Account) To: Martin Klingler Subject: Re: AW: AW: Problems with radiance and LINUX
Rpict is not limited by RAM as rview is, because rpict does not store the final image in RAM. It processes the image line by line and sends the output to a file. I highly, strongly recommend (insist) that you read the Radiance digests for more information. A keyword search will 'net' much valuable information.
-Chas --------------------------------------------------------------- PCOND -H VERSUS XIMAGE -E HUMAN From jedev@visarc.com Mon Aug 4 09:32:31 1997 Date: Mon, 4 Aug 1997 12:27:59 -0400 (EDT) To: radiance From: jedev@visarc.com (John E. de Valpine) Subject: pcond -h vs ximage -e human Hi Chas: We are trying to get "pcond" to produce results similar to those from "ximage -e human." The problem is that in some cases we get results that are distinctly different. We are using "pcond -h" on the images. In some cases the results are quite similar to those from "ximage -e human" in other cases they are distincly different. The images are all based on the same scene and have been filtered in to the same exposure with "pfilt." Using "ximage -e human" on the set of images produces contrasts levels that appear consistent from image to image, whereas using "pcond -h" on the images produce some images that have contrast levels distinctly different from the others. What are we missing? -Jack de Valpine From greg@pink Sat Aug 9 22:00:50 1997 Date: Sat, 9 Aug 1997 21:57:57 -0700 From: greg@pink (Gregory W. Larson) To: jedev@visarc.com, radiance (Radiance Account) Subject: Re: PCOND ... Hi Jack (and Chas), Ximage -e human is roughly equivalent to pcond -c -s, not pcond -h. The -h option includes the -v and -a options, which ximage does not reproduce for efficiency (time) reasons. Also, ximage is not smart about foveal adaptation areas, so under certain circumstances, the histogram results can be (significantly) different, and the tone mapping therefore will also be different. I have noticed myself that pcond sometimes does a WORSE job than ximage in the darkest regions of an image, and I have tried to fix this somewhat in the latest release. I didn't see you at Siggraph -- were you there? I assume by the date on your e-mail that you either went late or had some deadlines or something and didn't go at all. -Greg ---------------------------------------------------------------------- From matthews@artifice.com Fri Jul 18 12:57:23 1997 Subject: Radiance on Power Mac Date: Fri, 18 Jul 97 12:54:48 -0700 From: Kevin Matthews To: "Radiance Discussion List" Hello, For anyone out there interested in installing Radiance on Power Macintosh, we've updated the instructions provided at the Artifice web site, at: I think you'll find these updated step-by-step instructions are pretty complete and reliable. Any feedback would be appreciated. In related news, we've also posted a complete Radiance materials substitution library for DesignWorkshop users, so tiling and properties materials assigned in DesignWorkshop and previewed with QuickDraw 3D rendering can now be automatically replaced with matching Radiance textures. You might also find this overall set of about 100 useful and pre-scaled architectural materials to be useful in its own right. It's available in the DesignWorkshop owners-only area at . You can see example results of using this library, as applied to a simple DW sample model, on our message board at: (The Radiance treatment of the copper hood over the fireplace gives an especially dramatic improvement over the QD3D version.) Onward and upward, Kevin Matthews + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Artifice, Inc. 
800.203.TECH http://www.artifice.com new tools and media for environment designers 541.345.7421 voice - 541.345.7438 fax - PO Box 1588, Eugene, OR 97440 + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
OPTIMIZING OCONV (DENSE OCTREES)
From mlarch@ix.netcom.com Thu Aug 7 15:22:33 1997 Date: Thu, 07 Aug 1997 18:17:54 -0400 From: Morgan Larch To: radiance Subject: "oconv internal - set overflow in addobject" and MAXSET
Hello all, I have been working on some tiles and ran into the error message in the subject line from oconv. Looking into the code I see that oconv will indeed fail if MAXSET is exceeded. Looking into object.h I found that MAXSET is defined as 127. Is there any problem with raising this to, say, 5120 or so? Is there a better way to get around this? Is there a reason why it is set (low)? Thanks much, mlarch@ix.netcom.com
-----
From radiance Fri Aug 8 10:53:56 1997 Date: Fri, 8 Aug 1997 10:53:56 -0700 From: radiance (Radiance Account) To: Morgan Larch Subject: Re: "oconv internal - set overflow in addobject" and MAXSET Cc: radiance
Morgan, You'll have to wait for Greg to come back from vacation to answer this question. I have my limitations, as you are learning, about what I can confidently answer. Guessing: if MAXSET controls the maximum number of objects per bounding box, then you don't want this number to be very high, or the ray tracing calculation will start to slow down a whole lot. You could always try it and see. I suspect that if you are creating an octree of just one column and still having a problem with set overflow in addobject, you'll need to break the column down even further. I once modeled a Corinthian column capital and had to model each quadrant of the capital in a separate octree and use mirroring to assemble the entire capital. These four pieces were then made into another instance and added to the rest of the column components. Good luck! -Chas
--------
From mlarch@ix.netcom.com Sat Aug 9 10:46:23 1997 Date: Sat, 09 Aug 1997 13:39:55 -0400 From: Morgan Larch To: radiance Subject: followup, oconv and set overflow
Thanks Chas for getting back to me. I read through the digests again for other discussions about this. Greg has a lot to say there. What I did instead of playing with MAXSET (in object.h) was to rework my tiles algorithm. Because the tiles are short, I can exclude all non-visible polygons from the rad file (based on camera position). That, coupled with ramping up the octree resolution to 20000, seems to have fixed the problem. The one question I have, though, is: is that TOO high an octree resolution, and should I be trying to reduce it further? By too high I mean outside of oconv's normal operational range. Or does it not really matter -- a case of `whatever works, works'? Thanks mlarch@ix.netcom
------
From radiance Sun Aug 10 22:48:18 1997 Date: Sun, 10 Aug 1997 22:48:18 -0700 From: radiance (Radiance Account) To: Morgan Larch Subject: Re: followup, oconv and set overflow Cc: radiance
I think an octree resolution of around 5000 is not too uncommon, and I don't think a resolution of 20000 is unreasonable either. I've used 5000 with terrain data quite successfully. Terrain data is a good example where the specific "granularity" of the geometry affects how to optimize the oconv process. A terrain field is made up of a mesh of triangles. What size should the "-n objlim" be?
Well, how many triangles fit conveniently into a cube, keeping the count greater than four (the lower limit of speed optimization for ray tracing) but not much greater than 5 (the optimum value as per the man page)? Answer: 16.
.------.------.------.------.
| /|\ | /|\ |
| / | \ | / | \ |
| / | \ | / | \ |
| / | \ | / | \ |
| / | \ | / | \ |
|/ | \|/ | \|
.------.------.------.------.
| /|\ | /|\ |
| / | \ | / | \ |
| / | \ | / | \ |
| / | \ | / | \ |
| / | \ | / | \ |
|/ | \|/ | \|
.------.------.------.------.
The more conveniently the geometry fits into the basic shape of the voxel (a cube), the fewer voxels and the fewer sub-voxels needed to complete the octree process. An "objlim" of four would also help, because oconv will not fruitlessly attempt to find a fifth object to squeeze into each and every voxel, but this does not reduce the number of voxels or sub-voxels needed to get the job done as compared with the default value of five. Did I already mention that one can save a lot of voxel space by creating the octree *before* applying any rotation to the geometry? In the case of terrain data, this is especially important. If your geometry was originally axis-aligned when you created it, this is the best time to create an octree of it for use as an instance. Lastly, you're using the -f option of oconv, right? This reduces the time required to load in the first occurrence of each "instance" entity. The downside is that the octree has to be re-created if the materials change. Please share some renderings of your model with me when you are satisfied with it. -Chas
--------------------------------
HINTS FOR BEGINNERS #1
From tw45070@vub.ac.be Tue Aug 5 02:28:41 1997 Date: Tue, 05 Aug 1997 11:26:47 +0200 To: gregl@asd.sgi.com From: Alick Gerené Subject: Questions concerning Radiance
I'm a student in Applied Sciences, Department of Architecture, at the Free University of Brussels, Belgium. I've mailed you before, but now I have some rather specific questions. I'm working on a daylight simulation using the Radiance program on an HP station. Since I'm relatively new to the program and UNIX, I'm encountering some problems; I have gone through the manual and the tutorials but still can't solve them. Here they are:
1/ The scene I'm simulating is the interior of an old storehouse (about 100m x 30m) which uses a central skylight to get light in (a strip running along the longest axis). The roof is supported by columns and girders (made up of I-beams > many faces !!). It's made up of simple materials:
floor: concrete
walls: brick (painted whitish)
roof: wooden panels
columns/girders: steel
I'm having trouble getting the material definitions right (texture and roughness ??), especially the steel and the brick. Also, when I do a simulation using the interreflection calculation, the floor seems to go white instead of looking like concrete. Probably because I do something wrong with the definitions. Could you please give me some suggestions concerning this problem?
2/ Since the only light is coming from the skylight, the interior is mostly illuminated by reflection off the floor. To get the light right I'm using the gensky command to simulate the CIE skies. Now, to do the interior, do I use illum for the glass of the skylight or for the floor? Also, do I define it with the material, or can I use the mkillum option in rad to turn the floor into an illum? How do I get a good image? The ones I seem to get don't look very real to me.
I hope I didn't put in too many questions at a time, and I hope you will be able to help me out. Many thanks, Alick Gerené tw45070@vub.ac.be
---------
From greg Sat Aug 9 22:22:30 1997 Date: Sat, 9 Aug 1997 22:22:29 -0700 From: greg (Gregory W. Larson) To: Alick Subject: Re: Questions concerning Radiance
Hi Alick, I'm going to take a really quick stab at this as I have a week's worth of mail to catch up with, and hope that Chas fills in with some details. Materials:
floor: concrete
walls: brick (painted whitish)
roof: wooden panels
columns/girders: steel
Here are some guesses to start with:
void plastic concrete 0 0 5 .25 .23 .2 0 0
void plastic white_painted_brick 0 0 5 .4 .4 .4 .02 .1
void plastic wood_panel 0 0 5 .5 .2 .08 .01 .15
void metal steel 0 0 5 .3 .2 .1 .1 .1
I'm assuming in the above that your steel beams are pretty dirty, since it's a warehouse. My best advice is to use an illum type for the skylights, with the skyfunc distribution as its modifier, i.e.:
void glass sky_glass 0 0 3 .6 .6 .6
skyfunc illum sky_window 1 sky_glass 0 3 .55 .55 .55
Here again, I'm assuming that your windows are a bit dirty, which is very likely for skylights. To generate your results, you don't have to run mkillum on your floor, but you should set the INDIRECT variable in rad to 2 and VARIABILITY to High. This will make your renderings take longer, but it is needed to properly integrate the contribution from the floor. If you aren't using rad or trad, DO!! Good luck! -Greg
_____________________________________________________________________ Gregory Ward Larson (the computer artist formerly known as Greg Ward) Silicon Graphics, Inc. Computer Science Department 2011 N. Shoreline Blvd., M/S 07U-553 537 Soda Hall, UC Berkeley Mountain View, CA 94043-1389 Berkeley, CA 94720-1776 (415) 933-4878, -2663 fax (510) 642-3631, -5775 fax gregl@sgi.com on Tues., Thurs. and Fri. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Alick, The only thing I'd add to Greg's reply is... Have you defined your "ZONE=" variable in your .rif file? Assuming you are using RAD at all, the default is that you are rendering an "Exterior" space, i.e.:
ZONE= E ....
This causes the "ambient value" to be set much too high for interior renderings. Don't use mkillum unless you have to. The ambient calculation works well, and you avoid other potential pitfalls of the simplifications implied by mkillum. Did I mention that ambient calculation parameters have been talked about at length in the Radiance digests? You can find the archives on the Radiance ftp site:
ftp://radsite.lbl.gov/radiance/rad/pub/digest
ftp://radsite.lbl.gov/radiance/rad/pub/discuss
-Chas
+---------------------------------------------------------------------------+ | Charles Ehrlich Phone: (510) 486-7916 | | Principal Research Associate Fax: (510) 486-4089 | | UC Lawrence Berkeley National Laboratory Contact person for: | | 1 Cyclotron Road MS: 90-3111, Berkeley, CA 94720 RADIANCE and ADELINE | +---------------------------------------------------------------------------+
-------------------------------------------------------------
PERSPECTIVE AND PARALLEL VIEW CORRESPONDENCE
From nhsadika@barrow.uwaterloo.ca Fri Aug 15 22:05:11 1997 Date: Sat, 16 Aug 1997 01:03:24 -0400 (EDT) From: Navid Sadikali To: "Gregory J. Ward" Subject: Radiance question: Perspective and Parallel correspondence
Hi. I have a simple scene with a sphere in the left center and a box to the right of the sphere.
My question is this: If I take a perspective view with a 45 degree view angle (both h&v), what size parallel view (in world coordinates) would I use so that the images of the sphere would appear of similar size in both perspective and parallel views? Do I need to calculate the distance to the sphere and figure out the width of a projection plane at that point?
(* sphere)
|--------|   is this the width of the corresponding parallel view I need?
\ /
\ * /   <-- width of plane at intersection
|--------|
\ /
\ 45d/
\ /
\/
perspective view
-----
From greg Sun Aug 17 11:42:46 1997 Date: Sun, 17 Aug 1997 11:42:45 -0700 From: greg (Gregory W. Larson) To: Navid Sadikali Subject: Re: Radiance question: Perspective and Parallel correspondence
Hi Navid, The short answer to your question is "yes." You need to compute the width of the projection plane at the sphere's distance from your viewpoint. For a 45 degree view, the width of the projection plane is 2*Distance*tan(45/2), where the tangent is computed from degrees. Hope this helps. -Greg
--------------------------------------------------------
PROBLEM WITH PENUMBRAS
From mlarch@ix.netcom.com Fri Aug 15 07:44:43 1997 Date: Fri, 15 Aug 1997 10:40:08 -0400 From: Morgan Larch To: radiance Subject: anti-aliasing, penumbra
Hi all, I'm having a hard time getting familiar with all the rendering command line switches (a Pentium 133 is not the best platform on which to run multiple trial and error sessions), so I have put up a simple image: http://pw1.netcom.com/~mlarch/house/index.html and would really appreciate anyone describing which switches would help where in cleaning up the image. The image shows best what I am talking about, and whatever advice you might have will be welcomed. thanks, mlarch@ix.netcom.com
From radiance Fri Aug 22 13:21:33 1997 Date: Fri, 22 Aug 1997 13:21:33 -0700 From: radiance (Radiance Account) To: mlarch@ix.netcom.com Subject: penumbras #2
Morgan, I finally saw your comments at the bottom of the page that contains the big image. You are using RAD and you have PEN= T. Good. Here's my next question. I notice that the shadows all seem to be diverging. Do you mean to be simulating light from the sun? If so, then you should be using gensky. This creates an infinitely distant "source" type with the proper brightness and subtended angle, and also creates the diffuse skylight component. The shadows will also all be parallel. See the gensky man page for more information. Using an infinite source may not eliminate the banding effect, though. Next clue: rad automatically runs pfilt on the intermediate image, usually called .unf, before coming up with the .pic file. When you subsequently ran pfilt, what options did you use? Did you use "-r 1"? This enables Gaussian filtering instead of the default box filtering. This will definitely reduce the banding effect. You can also put this pfilt command line option in the .rif file, like:
pfilt= -r 1
And if you need the image to be reduced in size again, then be sure to use -r 1 on the command line. That should take care of the problem. If not, let me know and I'll open the discussion up to the discussion list. -Chas
----------
From mlarch@ix.netcom.com Sun Aug 24 07:13:39 1997 Date: Sun, 24 Aug 1997 10:11:03 -0400 From: Morgan Larch To: radiance-discuss Subject: Re: penumbras #2
Thanks *again* Chas for getting back, hope the vacation is a Vacation. -cke The shadows are diverging here because of a local light source not too far above the roof.
Sky light is a little farther down the road yet, after I work out the skylight filtering. Working with pfilt as you describe helps, yes. But what I ended up doing was recompiling the Radiance package with the -DMC switch (Monte Carlo). This had a positive effect. I updated the web page: http://pw1.netcom.com/~mlarch/house/index.html There is a little more description there.
--------
-cke
From mlarch@ix.netcom.com Mon Aug 25 03:01:17 1997 Date: Mon, 25 Aug 1997 05:58:39 -0400 From: Morgan Larch To: radiance-discuss Subject: Penumbras and -DMC
I'm not sure just where this file came from, but it was with a lot of other html files buried in a ../Notes/ directory. Because it is short, I went ahead and included it below. mkarch@ix.netcom.com
>SNIP<
RADIANCE Compile Switches
Here is a list of compile switches, used to customize Radiance code for specific machines and users:
-DMC If set, switches from the default low-discrepancy sequence sampling to true (pseudorandom) Monte Carlo. Use if the "brushed" appearance of specular highlights and penumbras bothers you.
RPIECE AND PARALLEL PROCESSING
From pedelty@ltpmail.gsfc.nasa.gov Wed Aug 13 08:58:51 1997 From: Jeff Pedelty Subject: rpiece question To: radiance-discuss Date: Wed, 13 Aug 1997 11:54:15 -0400 (EDT)
Radiance users: We're trying to gear up to do large renderings on a Cray T3E here at NASA Goddard. It appears that the NFS file locking mechanism works on the T3E, and so rpiece seems to work 'out of the box'. However, we will evaluate the performance on the T3E, and consider implementing an MPI version if the NFS overhead seems excessive. Before we get to that point, however, we want to understand the current behavior of rpiece. To simplify things, we are running it on just a single workstation. We find that we do not get the same .pic file when we ask rpiece to render the scene in a single chunk (i.e. -X 1 -Y 1) or when we break it into 4 pieces (i.e. -X 2 -Y 2). We are seeing the same results whether we run on an SGI Indy, a Sun Sparcstation, or on a single processor of the T3E. The rendered images are qualitatively the same in appearance, and have the same min and max values, but many of the pixels have very different values when rendered in the two different ways. My expectation is that a parallel rendering should give very nearly the same output as a serial one, but perhaps I'm not understanding something fundamental about the way rpiece/rpict work. However, I've not seen any earlier discussion on this topic. Any enlightenment would be most appreciated. I can send anyone my scripts, input files, results, etc., if appropriate. Thanks! Jeff Pedelty
---- Jeffrey Pedelty | jeffrey.pedelty@gsfc.nasa.gov Biospheric Sciences Branch | Phone: 301-286-3065 Code 923 | Fax: 301-286-0239 NASA's Goddard Space Flight Center | Greenbelt, MD 20771 USA | Heart, hearth, and earth
---------
Jeff, It sounds like you have made rapid progress since your last e-mail message! Joe Klems and I have been talking about the T3E and wondering if something like MPI is going to be necessary. It sounds like it won't, but we are also holding final judgement until we solve the entire problem of making a real-time renderer out of Radiance. Regarding the comparison of the output between rpiece and rpict: rpiece saves its images as non-run-length-encoded image files, the "old" behavior of Radiance back before version 2.3. So if you're doing a binary comparison of the file bits and/or file sizes, there will not be a 1-to-1 match.
To convert the output from rpiece to the standard RLE version, just pass the image through pfilt. The last switch you need to toggle to make sure that you can compare any two images on a pixel-by-pixel basis is the rpict/rpiece "-pj 0" option. As per the man page, this turns off pixel jittering, one of Radiance's anti-aliasing features. When using -pj > 0, every Radiance image of the same scene will be somewhat different each time it is rendered. With -pj = 0, it samples pixel centers only, eliminating the randomness in where the sample will actually fall. If then, by using pvalue, you come up with very different pixel values and/or file sizes, we might have an internal logic problem with the T3E compiled version. Also, be sure not to miss "ranimate". It is a very powerful tool for managing the rendering of animation sequences. It deals with tape device(s) for storing image files, knows about file storage space limits and maintains them, figures out which frames to render and which frames to "tween" with pinterp, knows about multiple CPUs, and has many more features. Unfortunately, it doesn't work with rpiece. If you don't have a way of creating animation paths, I recommend Peter Apian-Benowitz's utility, which is linked from the main Radiance WWW site. Also note the latest bug report/patch. Be sure you have the very latest version of Radiance (3.1.2), because a long-standing bug in rpiece was fixed. This minor bug disabled the use of the "-pa 0" option for rendering views without regard to pixel squareness. I was having some file-locking problems with Linux until I compiled into the Linux kernel a special driver for "mandatory file locking". It is a package by Andy Walker, last updated April 15, 1996. Also, I'd be happy to create a repository on the radsite for parallel processing scripts, code, etc. Perhaps this is the beginning of a Radiance Parallel Rendering User Group? Ciao, -Chas Charles Ehrlich
+--------------------------------------------------+-----------------------+ | Charles Ehrlich | Phone: (510) 486-7916 | | Principal Research Associate | Fax: (510) 486-4089 | | UC Lawrence Berkeley National Laboratory | Contact person for: | | 1 Cyclotron Road MS: 90-3111, Berkeley, CA 94720 | RADIANCE and ADELINE | +--------------------------------------------------+-----------------------+
From greg@pink Fri Aug 22 15:02:35 1997 Date: Fri, 22 Aug 1997 14:59:44 -0700 From: greg@pink (Gregory W. Larson) To: radiance (Radiance Account) Subject: Re: rpiece question
The reason you don't get the same result when you break up the image differently is that each chunk gets sent to a separate rpict process, which uses some logic to determine sampling parameters on the pixels. The sampling parameters for four separate chunks will always be different from those for one big chunk. These sampling parameters then go on to feed the various stochastic processes during rendering, which results in variance at each pixel. The best you can hope for is the same overall average. In short, your pixels may vary. The same effect is sometimes evident between identical renderings on different machines, which may have different implementations of the library pseudorandom number generator. If you want to arrive at identical images, you can use the following settings to turn all stochastic sampling off:
-pj 0 -dj 0 -dp 0 -sj 0 -ab 0
You won't get indirect contributions or proper specular highlights, but you should get reasonably consistent pixels from one rendering to the next.
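Purely as an illustration of using those flags to check for pixel-exact repeatability (the view file, scene name and resolution below are invented, not taken from Jeff's setup), a comparison might look something like this:

# render the same view twice with all stochastic sampling turned off
rpict -pj 0 -dj 0 -dp 0 -sj 0 -ab 0 -x 512 -y 512 -vf test.vf scene.oct > run1.pic
rpict -pj 0 -dj 0 -dp 0 -sj 0 -ab 0 -x 512 -y 512 -vf test.vf scene.oct > run2.pic
# dump the raw pixel values with the headers stripped and compare them directly
pvalue -h -d run1.pic > run1.txt
pvalue -h -d run2.pic > run2.txt
diff run1.txt run2.txt && echo "pixel-for-pixel identical"

The same sort of check can be applied to rpiece output after passing it through pfilt, as Chas describes above.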
The only other thing that occurs to me is that when you break your image into chunks, you must make sure that the horizontal and vertical resolutions are exact multiples of the specified divisors. If they are not, then slight registration differences in the pixels will cause different rays to be sampled depending on how the image is broken up. To avoid this problem, use a square view (i.e., -vh and -vv the same) and use multiples of two for your divisors (-X and -Y parameters). I hope this helps. -Greg
_____________________________________________________________________ Gregory Ward Larson (the computer artist formerly known as Greg Ward) Silicon Graphics, Inc. Computer Science Department 2011 N. Shoreline Blvd., M/S 07U-553 537 Soda Hall, UC Berkeley Mountain View, CA 94043-1389 Berkeley, CA 94720-1776 (415) 933-4878, -2663 fax (510) 642-3631, -5775 fax gregl@sgi.com on Tues., Thurs. and Fri. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
From darkwing@bsdd.regensburg.com Fri Aug 22 15:50:14 1997 To: Jeff Pedelty Subject: Re: rpiece question Date: Fri, 22 Aug 1997 15:45:26 -0700 From: Christoph Hoegl
I'm probably not the right one to answer this to the full extent, but I ran across the same problem some time ago and will try to explain the magic behind the parallel usage of Radiance.
> We're trying to gear up to do large renderings on a Cray T3E here at
> NASA Goddard. It appears that the NFS file locking mechanism works on
> the T3E, and so rpiece seems to work 'out of the box'. However, we
> will evaluate the performance on the T3E, and consider implementing an
> MPI version if the NFS overhead seems excessive.
MPI is more or less useless (s/MPI/PVM if you like :-) ). I tried it already, but at best it got about 5% faster (on average). The simpler and more effective way is to optimize the size and shape of the slices to reduce empty loops, and hand them to some scheduler (NQS). Always keep in mind that communication (locking, data turnaround, ...) kills the effectiveness of parallelism. It is best to give each CPU as big a slice as possible and to spend some time on cutting more equal slices (in the sense of CPU cycles). (I do this by cutting small slices at very expensive regions (glass/metal/...) and enlarging "boring" regions in the background; you may find the emboss function of programs like the GIMP ( http://www.xcf.berkeley.edu/~gimp/ ) and its Scheme interface (to do this cutting automatically) very convenient.)
>
> Before we get to that point, however, we want to understand the
> current behavior of rpiece. To simplify things, we are running it on
> just a single workstation. We find that we do not get the same .pic
> file when we ask rpiece to render the scene in a single chunk (i.e. -X
> 1 -Y 1) or when we break it into 4 pieces (i.e. -X 2 -Y 2). We are
> seeing the same results whether we run on an SGI Indy, a Sun
> Sparcstation, or on a single processor of the T3E.
>
Be sure to disable all "speed" optimizing flags to r* as they may cause loss of rays in the calculation. Test your setup with a simple scene: a light at 0 0 0, a view from 10 0 0 with vd 1 0 0, and a prism located with its main axis along vd at 10+ 0 0. Render the whole thing with one view, then render a quarter (the upper left), and compare the parts: they must be identical, or you have lost rays due to some wrong or unset flags to the renderer!
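As a concrete (and entirely hypothetical) version of the kind of minimal test Christoph describes, something along these lines could be used; all names, dimensions and resolutions below are made up, so adjust them to taste:

# test.rad -- a sphere source at the origin and a box straddling the view axis
void light bright
0
0
3 1000 1000 1000

bright sphere probe_light
0
0
4  0 0 0  .5

void plastic gray
0
0
5 .5 .5 .5 0 0

Then, from the shell:

genbox gray target 1 1 2 | xform -t 12 -.5 -1 >> test.rad
oconv test.rad > test.oct
rpict -vp 10 0 0 -vd 1 0 0 -vu 0 0 1 -x 256 -y 256 test.oct > whole.pic

With the stochastic sampling options disabled as discussed earlier in this thread, a quarter of the same view rendered separately should match the corresponding region of whole.pic.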
My > expectation is that a parallel rendering should give very nearly the > same output as a serial one, but perhaps I'm not understanding > something fundamental about the way the way rpiece/rpict work. > However, I've not seen any earlier discussion on this topic. > > Any enlightenment would be most appreciated. I can send anyone my > scripts, input files, results, etc., if appropriate. > Make them available by FTP or HTTP and i will peek at it. (best if tared and gzipped) regards, Christoph -- Christoph Hoegl / darkwing@bsdd.regensburg.com / (Darkwing@berkeley.edu) Siedlungsstr. 18 93128 Regenstauf Germany Fax:+49 940270611 12a Tellerrd. 93720 Berkeley CA USA Fax:+1 (510) 642 1043 => OOOOOOOOOOOOOO = Now in the "TUNNEL IN NO TIME"-team = OOOOOOOOOOOOOOO => ----------- From tcvc@falstaff.ucs.indiana.edu Fri Aug 22 21:43:57 1997 Date: Fri, 22 Aug 1997 23:39:17 -0500 (EST) From: "robert a. shakespeare" To: Radiance Account Subject: Re: rpiece question I am just beginning to accumulate images rendered on a 64 processor SGI Origin 2000 using rpieces. In a week or so I might be able to contribute to this discussion with examples which have also been run on a single workstation, a 4 processor box and the supercomputer.. so far I cannot find differences between the results. Perhaps pcomb can show what the eye might not perceive... more later. Rob Shakespeare Theatre Computer Visualization Center Indiana University Bloomington, IN 47405 shakespe@indiana.edu, tcvc@indiana.edu ------------------------------------------------------------ GROUND PLANE ILLUMINANCE (AND AN ERRATA) From chris_hillsdon@hotmail.com Wed Aug 6 11:15:41 1997 From: "Chris Hillsdon" To: radiance Subject: ground plane illuminance Date: Wed, 06 Aug 1997 11:11:05 PDT Howdy Charles, I have been using RADIANCE for a short while modelling daylit office spaces (light shelves etc). I have some relatively simple questions for you (I hope) regarding gensky and calculating ground illuminances. Good to see that the book "Rendering with Radiance" is finally finished, I await its publication. I have waded through almost all the digests that I have (V1N1 .. V2N10) but still don’t quite understand some things and want to make sure I’m on the right track. I have a sky desciption as follows : # gensky 7 15 12 -c -a 51.7 -o 0 -m 0 # Solar altitude and azimuth: 59.9 -2.6 # Ground ambient level: 29.0 void brightfunc skyfunc 2 skybr skybright.cal 0 3 2 3.73e+001 5.80e+000 # my additions for sky and ground skyfunc glow sky_glow 0 0 4 0.986 0.986 1.205 0 sky_glow source sky 0 0 4 0 0 1 180 skyfunc glow ground_glow 0 0 4 1.6 0.8 .25 0 ground_glow source ground 0 0 4 0 0 -1 180 This sky is supposed to describe a CIE overcast sky (I’m pretty sure this is correct, the glow materials are calculated using l = 0.265*r + 0.670*g + 0.065*b so the luminosity is unaffected - digest V2N10) My questions are as follows : [1] Skyfunc describes the CIE overcast sky (varying to 1/3 zenith brightness at horizon). But what are the values passed to skyfunc on the last line : void brightfunc skyfunc 2 skybr skybright.cal 0 ** 3 2 3.73e+001 5.80e+000 ** Is this the zenith brightness? (I know this has been asked before but I couldn’t find an answer - V2N5). [2] The "Ground ambient level" as I understand it is irradiance/PI. So in order to calculate the external ground illuminance level it would be : "ground ambient level" * PI * 179 I am confused about this because there seems to be conflicting advice in the digests. 
Earlier digests suggest as above but the later digests (eg V2N9 p26 of 35) show the calc to be : "ground ambient level" / PI * 179 [3] With an overcast sky is the above the only contribution to horizontal ground illuminance (ie used for daylight factor calcs). [4] Where does the value of PI originate in the calculation. Is it because the projected area of the sky hemisphere is PI, as defined for source in the sky model. A hemisphere would have 2*PI steradians? [5] Are then the units for "ground ambient level" the watts/m^2. [6] As the ground is described as a glow and there is no ground plane (as such) then the ratio of biggest to smallest objects within the model should be unaffected by the ground. Sorry to bother you with such trivial questions but having looked through all my sources of information I still remain unsure. Any help you can provide in answering these questions would be greatly appreciated. Actually one final generic daylighting question. As I mentioned earlier I am trying to use RADIANCE for simulating office spaces with light shelves. I think I remember reading somewhere mkillum is not appropriate for daylighting where the daylight is penetrating the space, is this correct as I cannot find where I read this. Also should -av remain at the defaults for an interior space of -av 0.01 0.01 0.01 provided the -ab is set to a sensible level, (say -ab 4). Thanks alot Chris Hillsdon ______________________________________________________ Get Your Private, Free Email at http://www.hotmail.com From greg@pink Fri Aug 22 15:21:18 1997 Date: Fri, 22 Aug 1997 15:18:28 -0700 From: greg@pink (Gregory W. Larson) To: radiance-discuss, chris_hillsdon@hotmail.com, radiance (Radiance Account) Subject: Re: ground plane illuminance > From: "Chris Hillsdon" > To: radiance > Subject: ground plane illuminance > Date: Wed, 06 Aug 1997 11:11:05 PDT > > My questions are as follows : > > [1] Skyfunc describes the CIE overcast sky (varying to 1/3 zenith > brightness at horizon). But what are the values passed to skyfunc on the > last line : > > void brightfunc skyfunc > 2 skybr skybright.cal > 0 > ** 3 2 3.73e+001 5.80e+000 ** > > Is this the zenith brightness? (I know this has been asked before but I > couldn’t find an answer - V2N5). The answer may be found in the library function "skybright.cal" (In the src/gen directory in the distribution). Yes, the second real argument is the zenith brightness, in watts/sr/sq.meter. > > [2] The "Ground ambient level" as I understand it is irradiance/PI. So > in order to calculate the external ground illuminance level it would be: > > "ground ambient level" * PI * 179 > > I am confused about this because there seems to be conflicting advice in > the digests. Earlier digests suggest as above but the later digests (eg > V2N9 p26 of 35) show the calc to be : > > "ground ambient level" / PI * 179 I am embarrassed to say that this is a mistake in that particular digest. The ground ambient level should be multiplied, not divided by PI. Thanks for point it out. I just changed history by fixing my answer to that question.... > > [3] With an overcast sky is the above the only contribution to > horizontal ground illuminance (ie used for daylight factor calcs). Yes. In fact, daylight factor is not really defined for sunny skies. > > [4] Where does the value of PI originate in the calculation. Is it > because the projected area of the sky hemisphere is PI, as defined for > source in the sky model. A hemisphere would have 2*PI steradians? 
A hemisphere has 2*PI steradians, but only PI PROJECTED steradians. The difference is a cosine in the integral. PI is the correct factor -- just don't forget to multiply rather than divide. (Sheesh!) > > [5] Are then the units for "ground ambient level" the watts/m^2. Yes. > > [6] As the ground is described as a glow and there is no ground plane > (as such) then the ratio of biggest to smallest objects within the model > should be unaffected by the ground. Correct. > > Sorry to bother you with such trivial questions but having looked > through all my sources of information I still remain unsure. Any help > you can provide in answering these questions would be greatly > appreciated. > > Actually one final generic daylighting question. As I mentioned earlier > I am trying to use RADIANCE for simulating office spaces with light > shelves. I think I remember reading somewhere mkillum is not appropriate > for daylighting where the daylight is penetrating the space, is this > correct as I cannot find where I read this. Also should -av remain at > the defaults for an interior space of -av 0.01 0.01 0.01 provided the > -ab is set to a sensible level, (say -ab 4). I don't know where you may have read this, but it is not true. Mkillum works fine for spaces with penetrating daylight. Your value for the -av parameter should correspond to the light level in the space. The best trick I've found for setting it is to use 0.5/exposure_value, where exposure_value is the multiplier needed to get a good exposure for displaying the resulting picture. It is a slight chicken-and-egg problem in the sense that the ambient level affects the overall image brightness, but if you start with a too-small -av setting (and .01 is almost certainly too small unless your windows are tiny), then you can use this technique to arrive at a better value. > > Thanks alot > > Chris Hillsdon -Greg _____________________________________________________________________ Gregory Ward Larson (the computer artist formerly known as Greg Ward) Silicon Graphics, Inc. Computer Science Department 2011 N. Shoreline Blvd., M/S 07U-553 537 Soda Hall, UC Berkeley Mountain View, CA 94043-1389 Berkeley, CA 94720-1776 (415) 933-4878, -2663 fax (510) 642-3631, -5775 fax gregl@sgi.com on Tues., Thurs. and Fri. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ------------ PCOND PROBLEMS (ERRATA AND PATCH AVAILABLE) From rjrc2@cus.cam.ac.uk Tue Aug 5 09:00:26 1997 Date: Tue, 5 Aug 1997 17:01:38 +0100 To: radiance@radsite From: rjrc2@cam.ac.uk (Raphael Compagnon) Subject: pcond man page Hello, I just installed the latest version of Radiance and had a close look to the new pcond program. I find it very useful and well documented (I also downloaded its associated paper). 
It seems that the man page for pcond contains a few small errors in the EXAMPLES section. I read:
To prepare a picture to be sent to a film recorder destined eventually for a slide projector with a minimum and maximum screen luminance of 0.2 and 125 candelas/m^2, respectively:
pcond -b 0.2 -t 125 final.pic > film.pic
To do the same if the output colors of the standard image "ray/lib/lib/macbeth_spec.pic" have been measured:
macbethcal -c mbfilm.xyY > film.cal
pcond -b 0.2 -t 125 -f film.cal final.pic > film.pic
To further tweak the exposure to bring out certain areas indicated by dragging the right mouse button over them in ximage:
ximage -op -t 75 final.pic | pcond -i .5 -b 0.2 -t 125 -f film.cal final.pic > film.pic
But I realised that the -b and -t options are not supported by the program! If I am right, they should be replaced in the above examples by: -u 125 -d 625
Best regards, Raphael Compagnon
-------------
From rjrc2@cus.cam.ac.uk Tue Aug 5 09:54:04 1997 Date: Tue, 5 Aug 1997 17:55:13 +0100 To: radiance@radsite From: rjrc2@cam.ac.uk (Raphael Compagnon) Subject: pcond on hemispherical picture
Hello again, I applied pcond -h+ to a hemispherical picture (-vth -vh 180 -vv 180) and I observed that the black frame that circles the original picture has been mapped to a white colour. In addition, the periphery of the picture appears with large square steps. Turning on the various options in turn, I realised that both of these strange effects appear only in the veiling part of the calculation (pcond -v+). What is going wrong? Regards, R. Compagnon
_____________________________________________________ Dr. Raphael Compagnon The Martin Centre for Architectural and Urban Studies University of Cambridge Department of Architecture 6 Chaucer Road, Cambridge CB2 2EB, England Tel: +44 1223 331719 Fax: +44 1223 331701 E-Mail: rjrc2@cam.ac.uk WWW: http://www.arct.cam.ac.uk/mc/index.html
----------------
From greg@pink Wed Aug 13 11:22:37 1997 Date: Wed, 13 Aug 1997 11:19:46 -0700 From: greg@pink (Gregory W. Larson) To: rjrc2@cam.ac.uk (Raphael Compagnon) Subject: Re: various Cc: radiance
Hello Raphael, Good to hear from you! I just returned from Siggraph, and it's taking me a while to catch up with e-mails. I'm happy that you'll still be working with Radiance after your return. I'm glad that your tutorial course went well, also. Any notes you can provide would be very much appreciated. We are trying to assemble the book CD-ROM this month (duplicated on the web site), and course notes are a great thing to include, with your permission. As for "Applications of RADIANCE to Architecture and Lighting Design," this article was never properly published. It appeared in the 1994 IES conference proceedings, then again in the course notes from Ian Ashdown's 1996 course on Global Illumination in Architecture and Theater, but nowhere else. With some effort, I could put it on our web site, but I'm not sure the article has much value, aside from being an excuse to show off some nice Radiance images. Thank you for pointing out the inconsistencies in the pcond(1) man page. In fact, I noticed these myself, and corrected them after the 3.1 release. The other error with the hemispherical view is a divide-by-zero fault I hadn't run across, since I hadn't tested this image type. Thanks for bringing this to my attention. I'll put together a patch, and include it with the revised man page.
-Greg +---------------------------------------------------------------------------+ | PLEASE AVOID THE EVIL REPLY COMMAND, AND ADDRESS YOUR E-MAIL EXPLICITYLY! | +---------------------------------------------------------------------------+ | Radiance Discussion list (unmoderated): radiance-discuss@radsite.lbl.gov | |*Radiance Digest mailing list (moderated): radiance@radsite.lbl.gov | | Radiance Modeling list (unmoderated): radiance-model@radsite.lbl.gov | | Requests to subscribe/cancel: radiance-request@radsite.lbl.gov | | Archives available from: ftp://radsite.lbl.gov/rad/pub/digest | | Radiance world-wide web site: http://radsite.lbl.gov/radiance/ | +---------------------------------------------------------------------------+ Radiance Digest Volume 3 Number 3 October 10, 1997 The latest version of Radiance is 3R1P5. Archives can be found at: ftp://radsite.lbl.gov/pub/digest and ftp://radsite.lbl.gov/pub/discuss Index of Topics --------------------------------------------------------- LATEST PATCH FILES AVAILABLE AUTOCAD AND DXF TRANSLATORS RADIANCE FOR WINDOWS NT LIGHTS INSIDE BOXES WITH SMALL HOLES USING MKILLUM FOR SIDELIGHTED OFFICE LARGE AREA SOURCE GLARE CALCULATIONS PROCEDURAL MODELING WITH GENSURF AQUARIUM ANALYSIS MODELING WIRE MESH SPECULAR LIGHT SHELF 'EXPOSING' IMAGES FOR INDOOR AND OUTDOOR BRIGHTNESSES --------------------------------------------------------- LATEST PATCH FILES AVAILABLE From radiance Tue Sep 2 18:21:04 1997 Date: Tue, 2 Sep 1997 18:14:37 -0700 From: radiance (Radiance Account) To: raydisc Subject: New Radiance patch now available Radiance users, A new patch is now available at the Radiance WWW site: http://radsite.lbl.gov/rad/patch/ The changes include: 3.1.4 patch files: src/rt/ambient.c - fixes problems with -aw option src/cv/ies2rad.c - improvement for direct/indirect fixtures src/cv/source.cal - improvement for direct/indirect fixtures src/px/x11image.c - enhancement of 'h' and 'a' commands 3.1.5 patch files: src/cal/util/Rmakefile - fixes missing compatibility flags src/rt/persist.c - fixes problem with AIX select() differences src/util/netproc.c - fixes problem with AIX select() differences src/common/color.h - fixes problem with AIX macro definitions src/common/lookup.h - fixes problem with AIX macro definitions Instructions for installation are in the README File. Thanks Greg! -Chas --------------------------------------------------------------------------- AUTOCAD AND DXF TRANSLATORS From akienyy@nus.sg Fri Aug 29 11:01:15 1997 Date: Sat, 30 Aug 1997 01:56:39 +0800 To: radiance From: akienyy@leonis.nus.sg (Edward Ng) Hi As a new user of radiance, I have heard people saying that there is a autocad/dxf to radiance translator somewhere written by a Phil Thomson. Any idea where I can locate that or are there other offerings somewhere? Edward ________________________________ ________________________________ Dr Edward Ng ......... reply to (akienyy@nus.sg) School of Architecture, National University of Singapore Kent Ridge Crescent, Singapore 119260 Tel: 65-7723567 Fax:7793078 ________________________________ ________________________________ From radiance Tue Sep 2 13:49:44 1997 Date: Tue, 2 Sep 1997 13:49:43 -0700 From: radiance (Radiance Account) To: akienyy@leonis.nus.sg (Edward Ng) Subject: ACAD to Radiance Cc: radiance Dear Dr. Ng, Although answering this questions always makes me a little nervous because of my personal interest, I will venture to provide an answer that is as complete as possible. 
Any omissions are unintentional and I am eager to hear about additional solutions to this problem from the Radiance community. There are a few options for exporting and/or translating geometry from AutoCAD and DXF files into Radiance, none of them perfect. The AutoLISP "torad" program is available from the Radiance ftp site: ftp://radsite.lbl.gov/rad/pub/translators/torad.tar.Z Torad is slow because it is AutoLISP and will only run within versions of AutoCAD that support AutoLISP (is AutoLISP still supported in R14?). There is another "dxf2rad" lisp program in the same ftp directory, but as far as I can remember, it does not support as many entity types. The "radout" program is a close cousin of torad and because it is written in "C", it is much faster. If you have the Unix version of AutoCAD on a Sun workstation, then you can get a version of radout which is part of the "DDRAD" package from Georg Mischler at: http://www.schorsch.com/autocad/radiance.html {The last time I checked this site, it wasn't working. Give it a try in a few days before giving up.} And there is a version of "radout" available for AutoCAD Release 12 DOS and Windows3.11. It is mostly the same as torad except it does not support the export of views (this never works well in torad anyway), but, it includes support for CLOSED 3D PLINES. This capability means that any other entity type that degenerates to a 3D PLINE is also supported. This includes most SOLIDS. Sometimes this means "exploding" the higher-order geometry, sometimes even this is not necessary. Radout for DOS/Windows is available for a small fee from: http://www.innernet.com/radiance Lastly, the ADELINE software package contains an MS-DOS executable program that can translate DXF files into Radiance with varrying degrees of success. It will aparently work with ACAD Ver 13, but it does not support the new V13 3D entities. For more information about the ADELINE package, please see: http://radsite.lbl.gov/adeline This is by no means an endorsement for any of these packages and your success may vary. If anyone else would like to be mentioned, please let me know. Happy exporting, -Chas +---------------------------------------------------------------------------+ | Charles Ehrlich Phone: (510) 486-7916 | | Principal Research Associate Fax: (510) 486-4089 | | UC Lawrence Berkeley National Laboratory Contact person for: | | 1 Cyclotron Road MS: 90-3111, Berkeley, CA 94720 RADIANCE and ADELINE | +---------------------------------------------------------------------------+ RADIANCE FOR WINDOWS NT From greg@pink Thu Aug 28 10:09:36 1997 Date: Thu, 28 Aug 1997 10:06:45 -0700 From: greg@pink (Gregory W. Larson) To: radiance Subject: Alpha query >From MREANEY@KUHUB.CC.UKANS.EDU Wed Aug 27 13:56:05 1997 Date: Wed, 27 Aug 1997 15:53:42 -0500 (UTC -05:00) From: Mark Reaney Subject: Radiance on an Alpha box? To: gregl@asd.sgi.com Greg, A novice question. We would like to do some experiments with Radiance, but most of our computers are pretty wimpy. We may have the oportunity to use a very speedy NT Workstation that uses Alpha processors. Does the NT version of Radiance run on Alpha? Is there some way to get the unix version to run on it? Thanks for any advice. Mark Reaney ====================================================================== Mark Reaney, i.e.VR http://www.ukans.edu/~mreaney Institute for the Exploration of Virtual Realities () ___ \\ //|| \\ Dept. of Theatre & Film || //_\\ \\// || // Univ. 
of Kansas ||() \\__ () \/ || \\ mreaney@ukans.edu ======================================================================
From radiance Tue Sep 2 14:06:34 1997 Date: Tue, 2 Sep 1997 14:06:33 -0700 From: radiance (Radiance Account) To: mreaney@ukans.edu Subject: Radiance on an Alpha box? Cc: radiance
Dear Mark, I enjoyed my brief visit to your WWW site. I can see the use of Radiance in your work enhancing the realism you have already achieved with the tools you have at your disposal. A whole audience of people wearing head-mounted displays, eh? Quite impressive! There is a version of Radiance which will run on Wintel machines, but it has not been compiled for Alpha NT. The Wintel version is part of the ADELINE package which is available from LBNL at a cost of $450.00. Please see our WWW pages for more info: http://radsite.lbl.gov/adeline It is not optimized for NT, but does run well as a DOS shell application. In fact, it runs 40 to 70% faster than under DOS 6.22! I also suggest you contact your esteemed colleagues in the lighting department. Professor Moeck and his students did a fantastic job of using Radiance to render a church as part of a student competition. You can find their WWW site at: http://www.arce.ukans.edu/arce/681ld/daylight/dosdoc2.htm Sincerely, -Chas
+---------------------------------------------------------------------------+ | Charles Ehrlich Phone: (510) 486-7916 | | Principal Research Associate Fax: (510) 486-4089 | | UC Lawrence Berkeley National Laboratory Contact person for: | | 1 Cyclotron Road MS: 90-3111, Berkeley, CA 94720 RADIANCE and ADELINE | +---------------------------------------------------------------------------+
LIGHTS INSIDE BOXES WITH SMALL HOLES
From sunil@teil.soft.net Mon Aug 25 22:34:34 1997 Date: Tue, 26 Aug 1997 10:56:18 -0530 From: Sunil S Hadap Organization: Tata Elxsi (India) Ltd. To: Radiance Account , raydisc, radiance Subject: Set overflow error!
Dear Radiance Users, We are developing Radiance support for Alias, and it is almost done. We modeled a scene for the beta test and we get the error "set overflow", something related to MAXSET, which is currently set to 127. The error occurs in a part of the scene where there is a large number of polygons (greater than 256) in a small region. In particular, the scene contains a cylinder of small size, on the order of 10 cm, while the overall scene is 12x8 meters. I increased MAXSET (sorry) to 1024; it works for some cases but fails again in others. Can anyone suggest what the error is and a remedy for it? Thanking you, Sunil Hadap Tata Elxsi (India) Ltd.
fn: Sunil Hadap n: Hadap;Sunil org: Tata Elxsi (India) Ltd. adr: Whitefield Road, Hoody,;;Tata Elxsi (India) Ltd.;Bangalore;Karnataka;560048;India email;internet: sunil@teil.soft.net title: Specialist, Visual Computing tel;work: +91-80-8452016/17/18 tel;fax: +91-80-8452019
------
From tcvc@indiana.edu Tue Aug 26 09:45:18 1997 Date: Tue, 26 Aug 1997 11:40:34 -0500 (EST) From: "robert a. shakespeare" To: Jean-Louis Maltret cc: tcvc@falstaff.ucs.indiana.edu, raydisc Subject: Re: Set overflow error
I suggest that you break the scene into components and create a separate octree file for each. These component octree files can then be combined with either instance commands or using the -i flag in oconv. Using this method, I have been able to eliminate most set overflow problems and have not had to eliminate small details from huge scenes.
-Rob

Rob Shakespeare, Director Theatre Computer Visualization Center Room 200, Theatre Building V/Fax: 812-855-8827 Indiana University tcvc@indiana.edu Bloomington, IN 47405 shakespe@indiana.edu http://appia.tcvc.indiana.edu/~tcvc Announcement: RENDERING WITH RADIANCE: THE ART AND SCIENCE OF LIGHTING VISUALIZATION by Greg Ward Larson and Rob Shakespeare Morgan and Kaufmann Publishers (December 1997)

--------------

Date: Tue, 26 Aug 1997 08:01:24 -0700 From: greg@pink (Gregory W. Larson) To: Sunil S Hadap Subject: Re: Set overflow error! Cc: radiance-discuss

>From Sunil Hadap of Tata Elxsi (India) Ltd.:
> We are developing Radiance support for Alias, and it is almost complete. We
> modeled a scene for the beta test and we get the error "set overflow",
> something related to MAXSET, which is currently set to 127. The error occurs
> in a part of the scene with a large number of polygons (greater than 256) in
> a small region. In particular, the scene contains a cylinder of small size
> (on the order of 10 cm) while the overall scene is 12x8 meters. I increased
> MAXSET (sorry) to 1024; it works for some cases but fails again in others.
> Can anyone suggest what the error is and a remedy for it?

If you read the oconv manual page, it talks about this error, and the solution is to increase the -r command-line parameter value by factors of two. If you take your overall scene dimension and divide by the smallest region in which you expect to have a compound object, this arrives at the appropriate setting for this parameter. If you run into memory problems with this solution (I could have sworn we just discussed this in the latest digest), you can put your small, complex objects into separate octrees and populate the scene using the instance primitive. This is a really good idea if your small geometry is highly repetitive, like identical pieces of furniture scattered about.

-Greg

_____________________________________________________________________ Gregory Ward Larson (the computer artist formerly known as Greg Ward) Silicon Graphics, Inc. Computer Science Department 2011 N. Shoreline Blvd., M/S 07U-553 537 Soda Hall, UC Berkeley Mountain View, CA 94043-1389 Berkeley, CA 94720-1776 (415) 933-4878, -2663 fax (510) 642-3631, -5775 fax gregl@sgi.com on Tues., Thurs. and Fri. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

LIGHTS INSIDE BOXES WITH SMALL HOLES

From sunil@teil.soft.net Wed Aug 27 03:29:58 1997 Date: Wed, 27 Aug 1997 15:55:40 -0530 From: Sunil S Hadap Organization: Tata Elxsi (India) Ltd. To: Radiance Account , raydisc, radiance CC: sunil@teil.soft.net Subject: Slightly dirty image, help me to improve! (HTML)

Dear Radiance Users, The scene was modeled using the LightGen tool, but we can't render it any better than this. The rendering is done at high quality and takes hours. The rif file is attached. The following points may be of concern:

* The point lights used are very small (radius .05 m), with intensity [254 253 229].
* The light sources are spherical and are enclosed in box-type enclosures with one face open, which throws the light. There is no direct lighting in the room except in a small portion of it; all the light received by the room is due to interreflections. Mind you, the room color is not green but almost gray [0.7056 0.72 0.710823].
* The rif file sets the room detail to medium.
* VAR is medium; our logic: there is no sunlight, hence it is not high, and we see the light source directly, hence it cannot be low.
* PENUM is FALSE; we don't want jitter to further spoil the picture.
We don't understand the splotches, and we cannot remove them even if we increase -ad to 1024 and -as to 512. Also, there is a strange glow on the edges of the walls, and the pot looks as if it were made of fluorescent material. The shadow below the table has a brown tinge; we don't think this is realistic. The image in the mirror looks speckled when the roughness is very small (though not zero), 0.001. Sorry for the HTML mail. I am including the image.

view= Camera -vf hallOfVision/hughes_Camera.vf
QUALITY= H
AMBFILE=hallOfVision/hughes.amb
RESOLUTION= 512
ZONE=I -1e-06 4 -1e-06 4.00054 -1e-06 3
PENUMBRA=FALSE
render= -t 1 -ab 2
DETAIL=M
VAR=M
oconv=-n 3
scene=hallOfVision/hughes_l.rad hallOfVision/hughes_s.rad hallOfVision/hughes_o.rad

Thank You Sunil Hadap Mehaboob Ali Tata Elxsi (India) Ltd.

----------

From tcvc@indiana.edu Wed Aug 27 07:40:43 1997 Date: Wed, 27 Aug 1997 09:36:00 -0500 (EST) From: "robert a. shakespeare" To: Sunil S Hadap Subject: Re: Slightly dirty image, help me to improve! (HTML)

This is just a stab in the dark (no pun intended!), but by putting the -ab 2 in the render option, rad does not take the two interreflection bounces into account when setting up the rendering process. I would suggest removing the -ab 2 from render and including: INDIRECT= 2 Then take a look at the rad variable settings and compare them with what you were running previously: rad -n -e your.rif By including INDIRECT=2 you will likely find that rad will recreate your ambient file with data from a small image using a setting of -aw 0. Then, using this file, your final rendering is produced with a setting of -aw xxx. I would think that you might continue to override the -ad and -as automatic settings, replacing them with your higher values. Though this is a subtle change, it might improve the outcome. As for the greenish ambient light and glowing quality: ... I assume that you are using a white light source. As the wall is slightly weighted to green and blue, the greenish hue might be due to overexposure. I would expect that the blue vase, though insignificant to a broadly lighted scene, will affect the scene in your case. Try a red vase and see what changes!! I would not expect the undertable to appear so "bright", so again, you might reduce the exposure a stop, which in turn might help to eliminate the "glowing" quality. It would be interesting to see how pcond would modify the appearance of the image... after rerendering you might try: pcond -h- test.pic > testpd.pic

Rob Shakespeare, Director Theatre Computer Visualization Center Room 200, Theatre Building V/Fax: 812-855-8827 Indiana University tcvc@indiana.edu Bloomington, IN 47405 shakespe@indiana.edu http://appia.tcvc.indiana.edu/~tcvc Announcement: RENDERING WITH RADIANCE: THE ART AND SCIENCE OF LIGHTING VISUALIZATION by Greg Ward Larson and Rob Shakespeare Morgan and Kaufmann Publishers (December 1997)

---------

Date: Wed, 27 Aug 1997 09:05:43 -0700 From: greg@pink (Gregory W. Larson) To: Sunil S Hadap Subject: Re: Slightly dirty image, help me to improve! (HTML) Cc: radiance-discuss

The main problem sounds like your light sources. Instead of placing your spheres in boxes and expecting the interreflection calculation to determine their output, you must use small polygons at their openings and eliminate the spherical sources. This will cure most of your problems. You may have to manually set the -aw parameter to 0 in the "render" variable of rad if you still see strange glowing edges.
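A minimal sketch of that fix, with purely illustrative dimensions and output values: the spherical source inside each box is removed, and a light polygon sized to the opening is placed in the plane of the opening. For a diffuse emitter the total output is roughly pi x radiance x area, so the radiance below would be scaled to match the intended fixture output.

    # luminous panel covering a 0.2 m x 0.2 m opening (values are placeholders)
    void light fixture_opening
    0
    0
    3 200 200 185

    fixture_opening polygon fixture.1
    0
    0
    12
        1.0  2.0  2.5
        1.2  2.0  2.5
        1.2  2.2  2.5
        1.0  2.2  2.5

The vertex order sets the surface normal, which should face into the room. Combined with Rob's suggestion, the rif would then drop -ab 2 from render and read something like:

    INDIRECT= 2
    render= -t 1 -aw 0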
-Greg

----------------------------

USING MKILLUM FOR SIDELIGHTED OFFICE

From Markku.Norvasuo@vtt.fi Tue Aug 12 04:39:46 1997 Date: Tue, 12 Aug 1997 14:37:50 +0300 (EET DST) To: ckehrlich@lbl.gov From: Markku Norvasuo Subject: Adeline/Radiance

Hi Charles, The Adeline package has been delightful and works well. However, getting acquainted with it takes some time, as you said. Concerning Radiance I have a few questions: 1) I have been modeling a sidelit office space. Is it reasonable to define window surfaces as illum sources if blinds (from genblind) are at the same time used beside them? I.e., does this arrangement behave correctly? 2) Should I use illum sources anyway to include sky glow in the interior illumination, or is this glow sufficiently included if the option -ab > 0? 3) I put the office scene in the mkillum files box in the Adeline/Rad user interface. When the original octree was named R013.oct, rview generated two additional ones, R0130.oct and R0131.oct. If I use rpict instead from a command line, is it correct to give the last one (R0131.oct) as the input octree? (I did not find clear advice in the manual, but this seems to work.) Do you have a mailing list similar to Radiance Digest, or should I subscribe to the latter? Thank you for your advice, Markku

________________________________________________________________ Markku Norvasuo, Senior Research Scientist Technical Research Centre of Finland, Building Technology Postal address: VTT Building Technology, P.O.Box 1801, FIN-02044 VTT, Finland Tel. +358 9 456 6269 (office), +358 40 515 1100 (mobile) Fax +358 9 456 6251, Internet: markku.norvasuo@vtt.fi

------------

From chas@pink Tue Aug 12 09:38:27 1997 Date: Tue, 12 Aug 1997 09:35:35 -0700 From: chas@pink (Charles Ehrlich) To: Markku Norvasuo Subject: Re: Adeline/Radiance Cc: radiance@pink

Markku, I'm glad to hear that you are not having as many problems with ADELINE as some of my other users. 1. The main purpose of the mkillum feature is to provide a means to define an average distribution for complex fenestration. Therefore, the most efficient way of modeling a sidelit office with blinds is to put the blinds on the "outside" of the illum surface. This may require that you create an "imposter" polygon with material type "void" which serves this purpose. It can sometimes be challenging to find ways to create the imposter geometry in a way that minimizes the potential inaccuracies which can be introduced with the use of illum. In particular, if direct sunlight strikes only a portion of your "imposter" or "illum" surface, the distribution for the whole window will be the average, but this may not produce satisfactory results. 2. In most cases you will want to use mkillum to define the diffuse component of the sky when you have complex glazing (blinds, light shelf, etc.). If you have simpler glazing (no blinds), then -ab > 0 works fine. In effect, using mkillum reduces the number of bounces necessary to achieve adequate results by one. Unfortunately, mkillum can sometimes cause "banding" when very close to the illum. 3. The "rad" command handles the two-step octree generation in the way you discovered. The latter octree will contain the scene geometry plus the glazing or imposter surfaces which have been sampled by mkillum. 4. I have been thinking about the fact that the promised user-support e-mail list for ADELINE never coagulated, mostly because I was doubtful that there were enough users of the software to make it work.
Perhaps now that I am in charge of the Radiance e-mail lists, I could combine ADELINE with the rest of the Radiance users. Perhaps I could also create a separate list for ADELINE-specific needs like Superlite, scribe, plink, etc., and keep the discussion of Radiance issues open to ADELINE users in the Radiance forum. Hmmmm. Good luck. Please share with me some images when you have something satisfactory to show. -Chas Charles Ehrlich Principal Research Associate

-------------------------------------

LARGE AREA SOURCE GLARE CALCULATIONS

From haico.schepers@arup.com Mon Jul 21 22:49:28 1997 Date: Tue, 22 Jul 1997 05:41:00 +0000 From: Haico Schepers To: greg Subject: Re: Can you help with this one? -Reply

Greg, Thank you for your last reply. I've been reading the journals you referenced in the manual, and they seem to imply that the program does take large sources into account. I tried to email LESO in Switzerland for confirmation at lesoweb@lesosun1.epfl.ch, but I have had no success getting through. Do you know of a person or email address I can contact with these questions? From a technical standpoint it would be of great benefit to know what algorithms Radiance uses to calculate glare indices. Also, with reference to your reply:
> 1/ I can't find xglaresrc to run the visualization program for Glare
Doesn't exist in ADELINE, since it's tied to X11.
Is there any way to get a copy of the script used for this program, and can I update Adeline2 at all to run this script? I recently read your paper on "A Visibility Matching Tone Reproduction Operator for High Dynamic Range Scenes". Our company largely uses Radiance to model natural daylight for passive solar design. We are greatly concerned with occupant comfort (with regard to glare) and occupants' perception of the workplace environment, so any developments in how a scene is viewed are of great interest. Can you inform me of any developments in this field and whether they have been or will be incorporated into Adeline2? Any information on this topic would be greatly appreciated. With kind regards, Haico Schepers

-----------

From greg Tue Jul 22 09:51:44 1997 Date: Tue, 22 Jul 1997 09:51:43 -0700 From: greg (Gregory W. Larson) To: Haico Schepers Subject: Re: Can you help with this one? -Reply

Hi Haico, I dug up an old document I'd written in 1992 after I wrote the glare script and programs in Radiance, and I put it on our mirror web site (still under development). You may access it directly over the web at: http://radsite.lbl.gov/mirror/radiance/man_html/Notes/glare.html This should get you partway there, but for the full scoop on daylight glare calculations using Radiance, you should talk to Raphael Compagnon, who is on sabbatical at Cambridge University, and may be reached at "rjrc2@cam.ac.uk". There really is no easy way to incorporate the xglaresrc program into Adeline, since it relies on X11 to circle glare sources in a displayed image. It's not as simple as translating a script, and I doubt anyone is putting the work into porting this particular utility, though you can write to Charles Ehrlich at "radiance@floyd.lbl.gov" if you want to ask about it. While you're at it, you can also ask him about the porting of pcond (the tone reproduction program I wrote for 3.1) to Adeline. There is no reason in principle why this should pose a problem, but the development plans on the PC-compatible platform are still in flux and I have nothing further to do with them as I no longer work at LBNL.
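For reference, a minimal sketch of how the glare programs are usually chained from the command line (the picture name and index types are only illustrative; see the findglare and glarendx manual pages and the note at the URL above):

    # locate glare sources in a finished picture
    findglare -p office.pic > office.glr

    # reduce the findglare output to a particular glare index
    glarendx -t guth_dgr office.glr
    glarendx -t guth_vcp office.glr

The interactive glare script wraps these two steps, and xglaresrc is only the X11 front end that circles the sources findglare reports in a displayed image.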
-Greg

_____________________________________________________________________ Gregory Ward Larson (the computer artist formerly known as Greg Ward) Silicon Graphics, Inc. Computer Science Department 2011 N. Shoreline Blvd., M/S 07U-553 537 Soda Hall, UC Berkeley Mountain View, CA 94043-1389 Berkeley, CA 94720-1776 (415) 933-4878, -2663 fax (510) 642-3631, -5775 fax gregl@sgi.com on Tues., Thurs. and Fri. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

PROCEDURAL MODELING WITH GENSURF

From panop@xarch.tu-graz.ac.at Tue Jul 8 05:07:27 1997 From: "Christos Panopoulos" To: greg (Gregory J. Ward) Date: Tue, 8 Jul 1997 14:05:42 +0000 Subject: create a quarter circle

Hi Greg, how can I create a surface for a circle segment (angle = PI/2)? And then extrude it? Thank you in advance. Christos Christos Panopoulos E-Mail: panop@xarch.tu-graz.ac.at http://xarch.tu-graz.ac.at/~panop

From greg Tue Jul 8 09:29:42 1997 Date: Tue, 8 Jul 1997 09:29:41 -0700 From: greg (Gregory W. Larson) To: panop@xarch.tu-graz.ac.at Subject: Re: create a quarter circle

Use the gensurf command. The x(s,t), y(s,t) and z(s,t) are independent parametric functions for the surface indices (s,t), each running between 0 and 1. Define them however you like. -Greg

_____________________________________________________________________ Gregory Ward Larson (the computer artist formerly known as Greg Ward) Silicon Graphics, Inc. Computer Science Department 2011 N. Shoreline Blvd., M/S 07U-553 537 Soda Hall, UC Berkeley Mountain View, CA 94043-1389 Berkeley, CA 94720-1776 (415) 933-4878, -2663 fax (510) 642-3631, -5775 fax gregl@sgi.com on Tues., Thurs. and Fri. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

AQUARIUM ANALYSIS

From chrisg@GLASS.lplizard.com Fri Sep 5 10:03:27 1997 Date: Fri, 5 Sep 1997 12:58:24 -0400 From: Chris Green To: radiance Organization: Leaping Lizard Software, Inc. (301)-963-8230 FAX (301)-963-9016

Do you know if anyone has ever used Radiance to render an aquarium environment? I am designing a very large salt-water reef aquarium for my home, and lighting is a big issue. I wish to be able to try out different combinations of halide bulbs of differing wattages and color temperatures with different reflector types, and see the lighting effect on the tank and the room viewing it. The surface ripples of the water also have a big effect on the way the light looks. It's kind of a worst-case caustic generator. Things I'd like to be able to do:

* See some pictures that would show what it might look like.
* Try different bulbs in different combos and placements. I'm not sure if the data needed by the simulation is available for the higher-temperature halide bulbs used for SW aquariums.
* Know the amount of PAR (photosynthetically active radiation) hitting a given point inside the tank. This would be used to figure out which areas were adequate for which kinds of corals. PAR is the number of photons hitting an area that are in the photosynthetically active wavelengths. Because of the ripples, I'd need it averaged over time.
* Try different types of glass for the front panel to see whether I need to use special fancy glass to avoid too much of a greenish tint. I could also see the effect of different coverings for the back and sides.
* Plan the layout of the reef structure in the tank and be able to see the shadow effects.

I would appreciate anything that you could tell me about the feasibility of this particular application, or if you know of anyone who has done anything similar.
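Returning to the gensurf question above, a minimal sketch (the modifier white_plastic, the radius of 1.5 and the height of 0.7 are only illustrative; PI is predefined in the function language): a quarter disc in the z=0 plane, plus the curved wall that extrudes its arc upward.

    gensurf white_plastic quarter_disc '1.5*s*cos(PI/2*t)' '1.5*s*sin(PI/2*t)' '0' 8 8

    gensurf white_plastic quarter_wall '1.5*cos(PI/2*t)' '1.5*sin(PI/2*t)' '0.7*s' 1 8

The two flat sides of the extrusion are plain rectangles and can be entered as ordinary polygons; if a generated surface faces the wrong way, swapping the s and t expressions (or running the output through xform -I) reverses its orientation.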
Thanks, -- Chris Green

-------------------------------------------------

Dear Chris, Radiance is probably as close as you'll come to a tool for the simulations you need. Short of a full-blown forward ray tracer like Breault Research's ASAP software (which is no easier to use than Radiance and doesn't produce nice images), the one limitation of Radiance that you'll have to live with is the caustics. Radiance can do a wavy surface of water, but you won't know if you have a problem with high concentrations of light where light rays are converging. This seems like a minor effect to me because the surface of the water in aquaria is more or less flat, right? Of more interest might be the bubbles. Greg's "podlife" sculpture is a good example of a model that uses water and bubbles. It can be found at: ftp://radsite.lbl.gov/pub/models/podlife.tar.Z The issue of caustics came up a year or two ago and is documented in either the Radiance Digests or the Discussion group archives, which can be found at: ftp://radsite.lbl.gov/pub/digest ../discuss Search for "caustic" and you should find more information. A great improvement in Radiance since those digest postings is that you can now model single-scatter "participating media", which in the case of an aquarium would be particles of dirt and/or haziness in the water. This material entity is called "mist." What you'll need to do for the photon-counting business is develop a correlation between the photosyn-whatever sensitivity you're after and Radiance's Red, Green and Blue spectral samples. It will be easiest to represent this value as an illuminance, so you'll have to correlate numbers of rays to illuminance as well. Take a look at the file called rlux inside the Radiance distribution. It is a C-shell script which calculates photopic illuminance. You'll have to tweak the multipliers for R, G and B to get what you want. Your other tasks all seem very reasonable. The most difficult task will be modeling the geometry of the coral. And, once you've gone through all of this trouble to develop an aquarium analysis tool, the great thing about Radiance is that it can be licensed for worldwide distribution at a cost of $10,000.00, so that you could sell it to other aquarium aficionados, if you're into that sort of thing. Good luck, -Chas

+---------------------------------------------------------------------------+ | Charles Ehrlich Phone: (510) 486-7916 | | Principal Research Associate Fax: (510) 486-4089 | | UC Lawrence Berkeley National Laboratory email: ckehrlich@lbl.gov | | 1 Cyclotron Road MS:90-3111, Berkeley, CA 94720 Contact person for: | | http://radsite.lbl.gov/radiance ../adeline RADIANCE and ADELINE | +---------------------------------------------------------------------------+

Date: Fri, 5 Sep 1997 11:09:45 -0700 From: greg (Gregory W. Larson) To: Chris Green Subject: Re: radiance and aquarium design

The sad answer is no, Radiance would not do the right thing, no matter how you modeled the surface of the water. Its backwards ray-tracing algorithm depends on knowing the locations of the light sources in order to find them, especially if they're nearly point sources! There's very little hope of simulating this environment correctly with the current version. Every system has its Achilles' heel, and this is it for Radiance. You would need some kind of bidirectional ray-tracing system to do what you want. You might investigate Arris Inspirer as a possible solution to your problem, or perhaps Specter, both from Integra corporation (http://www.integra.co.jp/eng/).
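On the PAR question, a minimal sketch of the kind of pipeline Chas points to with rlux (file names are illustrative; kr, kg and kb stand in for whatever weights your own PAR correlation yields -- the standard photopic weights for illuminance are roughly 47.4, 119.9 and 11.6):

    # sensors.inp holds one sensor per line: x y z  dx dy dz
    rtrace -h -i -ab 1 tank.oct < sensors.inp \
        | rcalc -e '$1=kr*$1+kg*$2+kb*$3' -e 'kr=47.4;kg=119.9;kb=11.6'

Averaging over the moving ripples would mean repeating the calculation for several snapshots of the water surface and averaging the results.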
Good luck! -Greg

MODELING WIRE MESH

Note: this message is being forwarded to the entire Radiance discussion group instead of the Radiance Modeling group because, in the transition from Greg to Chas, the "radiance-model" list has been lost. My apologies.

---------------

Date: Wed, 10 Sep 1997 07:59:31 -0500 From: "T. Prater" To: "'radiance-model@radsite.lbl.gov'" Subject: Wire Mesh

Hello Radiance users, Recently we have been trying to use Radiance to simulate the lighting conditions inside a small 'chamber'; this chamber will eventually be used to grow a plant. Two sides of the box will be covered in stainless steel wire mesh. My question is: how do I model this wire mesh? Are there any preexisting material definitions that would be useful in this situation? Really, the most important properties to us would be how much light (overall) is transmitted through and reflected back from the surface of the mesh (in other words, we are not that concerned with the subtle effects of individual wires in the mesh). We have conjectured that the BRTDfunc material would be the best option. Does this sound right? If so, I need a bit of help with that material. Are the 'rfdif', 'gfdif', ... 'btdif' parameters used internally by Radiance? Or are you supposed to use them in your function file? What, intuitively, do these quantities correspond to? How about physically? Thanks in advance for any help you can give... Todd Prater Dept. of Agronomy Kansas State University Manhattan, KS 66506

----------------

A reply from Chas: Hello Todd, With regard to your questions about a growing chamber and the modeling of wire mesh: while BRTDfunc would work, you would have to subject your wire mesh sample to some quite rigorous bi-directional reflectance and transmittance tests in order to use this material adequately. I would suggest instead that you try the trans or trans2 materials. The benefit of the trans_ materials is that a light-source ray is traced from both the reflected and transmitted ray directions (that is, a direct ray is used to calculate both the transmitted specular and the reflected specular component). While neither BRTDfunc nor the trans_ types are simple to understand, you can find examples of the use of each in the Radiance digests and/or discussion group archives on our WWW site. Have you spoken with your distant cousins at KU? Prof. Martin Moeck in the lighting department has used Radiance extensively. He might be able to offer you some additional advice. Another posting to the discussion group which I will be forwarding deals with the modeling of an aquarium. The individual is interested in calculating the amount of photosynthetic light reaching a particular surface. While I don't have the exact equation for this, you might also be concerned with this quantity. I suggest that you contact him, possibly collaborate, and let me know what you discover. Good luck, -Chas

--------------------------

Date: Wed, 17 Sep 1997 17:45:32 -0700 From: greg@pink (Gregory W. Larson) To: radiance (Radiance Account) Subject: Re: Wire Mesh Cc: radiance-discuss, radiance, squid@ksu.edu

I would go with Chas' recommendation to use the "trans" type as the most efficient solution. It may not reproduce the individual mesh elements, but neither would BRTDfunc. If you know the basic reflectance of your mesh material and the percentage of mesh to air in it, you can compute the parameters for the trans type. See the URL: http://radsite.lbl.gov/mirror/radiance/digests_html/v2n10.htm#TRANS_PARAMETERS for some more hints.
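A hedged sketch of what such a trans definition might look like. The seven real arguments are, roughly, the RGB colour, specularity, roughness, transmissivity and transmitted-specular fraction; the numbers below are placeholders only, since the proper values have to be derived from the measured reflectance of the steel and the openness of the weave, as described at the URL above. For an open weave, most of the transmitted light passes straight through the holes, so the transmitted-specular fraction would normally be high.

    # placeholder values -- derive real ones from reflectance and openness
    void trans steel_mesh
    0
    0
    7  .25 .25 .25  .05 .05  .40 .90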
-Greg

---------------------------------

SPECULAR LIGHT SHELF

Date: Tue, 16 Sep 1997 18:13:16 +0200 (MET DST) To: radiance-discuss From: Krzysztof Wandachowicz Subject: light shelf

Dear Radiance users, I'm working on the calculation of daylight in an office room with an internal reflective shelf. My shelf looks like a long (it runs the whole length of the wall with the windows) and wide (1 m) window sill (I don't know if "window sill" is the right term; I have found two different definitions in the dictionary: "window stool" and "parapet wall"). This shelf is only a first, simple model of my daylighting system. I have a more complicated idea for directing the light from the sun and sky to the ceiling, but I have used this shelf to test the material. I need two different reflective materials: diffuse (Lambertian) and specular (and/or semi-specular) reflection. I have tested Greg's BRTDfunc material (e-mail of 12 Sep 95 from Greg to manuel@ise.fhg.de) for a partially transparent shelf with 55% reflection and 25% transmission (below).

void BRTDfunc blind_grey
10 if(Rdot,.55,0) if(Rdot,.55,0) if(Rdot,.55,0)
   .25 .25 .25
   0 0 0
   .
0
9 0 0 0 0 0 0 0 0 0

void mirror blind_illum
1 blind_grey
0
3 .55 .55 .55

This example is good for specular reflection and direct transmission, but how do I change this description in order to obtain a diffuse or semi-diffuse characteristic? I have tried many times without success. So, for diffuse reflection I chose "plastic" and "mirror":

void plastic blind_grey
0
0
5 .7 .7 .7 0 0

void mirror blind_illum
1 blind_grey
0
3 .7 .7 .7

The plastic type in this example describes the diffuse reflection; the mirror type supports secondary light sources. I have made a calculation for a sunny day at noon with the windows on the south side. The light from the sun and sky reflects off my shelf and is directed to the ceiling. This is diffuse reflection, so the ceiling should be illuminated almost uniformly, but the picture looks different: I see on the ceiling a virtual image of my windows with high luminance. This is the light from the sun, and the light from the sky is reflected off the shelf in a directional way too. It is strange, because if I look at the ceiling I would think that the shelf reflects daylight directionally, yet if I look directly at the shelf I am sure that it reflects daylight diffusely. I don't know if "mirror" is the right type of material, but it is the only type that supports secondary light sources. I have tried the "illum" type, but without success. The other oddity is that changing the material type, or turning the sun off or on, doesn't influence the distribution of luminance. Time for the questions. How do I describe the diffuse reflection of such a shelf? And a more complicated, additional question: how do I change Greg's BRTDfunc description (above) for diffuse and semi-diffuse reflection and transmission? I will be grateful for any suggestions. The desperate Radiance user, Krzysztof Wandachowicz.

############################################ # Krzysztof Wandachowicz # # Poznan University of Technology # # Division of Lighting Engineering # # PL-60-965 Poznan, ul.Piotrowo 3a, Poland # # fax +48 61 8782389 # ############################################

--------------

Date: Tue, 16 Sep 1997 17:24:15 -0700 From: martin@color.arce.ukans.edu (Martin Moeck) Subject: reflections To: radiance-discuss

In response to Krzysztof Wandachowicz's questions, his modeling approach works fine for me.
The diffuse component shows up on the ceiling when considering

void plastic blind_grey
0
0
5 .7 .7 .7 0 0

and the specular component shows up when modeling a shelf of

void mirror blind_illum
0
0
3 .7 .7 .7

without mixing the materials. If you mix them by defining

void plastic blind_grey
0
0
5 .7 .7 .7 0 0

void mirror blind_illum
1 blind_grey
0
3 .7 .7 .7

both the diffuse component from the plastic and the specular component from the mirror are there. With light from the sun alone, these components work just fine. If you model the diffuse sky, the situation is a little bit more complicated. The ambient bounces from the diffuse plastic material are OK, but about the specular mirror and its effect on reflecting skylight I'm not so sure. It seems that the sky would have to be mimicked by creating an array of lights having the sky luminances of the corresponding patches. If you do

void light skyglow
0
0
3 .96 .96 1.2

skyglow source sky
0
0
4 0 0 1 180

or something like that, a sharp image of that source is created on the ceiling. That does not do the job, because one light source with a center is assumed. Martin Moeck mmoeck@ukans.edu

'EXPOSING' IMAGES FOR INDOOR AND OUTDOOR BRIGHTNESSES

From el2gasae@uco.es Wed Oct 8 05:52:35 1997 Date: Wed, 8 Oct 1997 14:49:10 +0200 From: Enrique Garcia Salcines To: greg Subject: A question

Hi Greg, I'm working with Radiance on illumination design. I have a problem with the sun, sky and ground descriptions that appear below. The model is a simple office with a large glass window. I have used the rad program with a viewpoint inside the room. The problem is that when I look through the window, the sky is not blue and the ground is not brown, which is what the RGB of these glow sources should give; everything appears white, like a very strong sun penetrating into the room. The gensky options were "gensky 10 8 13".

void light solar
0
0
3 6.76e+006 6.76e+006 6.76e+006

solar source sun
0
0
4 -0.032656 -0.711654 0.701771 0.5

void brightfunc skyfunc
2 skybr skybright.cal
0
7 1 1.15e+001 2.17e+001 5.64e-001 -0.032656 -0.711654 0.701771

skyfunc glow sky_glow
0
0
4 0.90 0.90 1.15 0.00

sky_glow source sky
0
0
4 0 0 1 180

skyfunc glow ground_glow
0
0
4 1.40 0.90 0.60 0.00

ground_glow source ground
0
0
4 0 0 -1 180

void glass Glxxx0
0
0
3 0.950000 0.950000 0.950000

skyfunc brightfunc window_dist
2 winxmit winxmit.cal
0
0

window_dist illum Glass01
1 Glxxx0
0
3 0.88 0.88 0.88

Glass01 polygon cristal.0
0
0
12
......the coordinates

----------------------------------------------------

Thanks in advance,

From: greg (Gregory W. Larson) Message-Id: <199710081618.JAA03709@pink.lbl.gov> To: Enrique Garcia Salcines Subject: Re: A question Status: R

The reason the ground and sky appear white is exposure. Just as in a photograph, it is very difficult to obtain an exposure that correctly presents both interior and exterior surfaces in their apparent colors. Try using the pcond program on your rendered picture. This program is designed to compensate for these dynamic range limitations in the display, and the result should be better in terms of perception. -Greg

+---------------------------------------------------------------------------+ | PLEASE AVOID THE EVIL REPLY COMMAND, AND ADDRESS YOUR E-MAIL EXPLICITLY!
| +---------------------------------------------------------------------------+ | Radiance Discussion list (unmoderated): radiance-discuss@radsite.lbl.gov | |*Radiance Digest mailing list (moderated): radiance@radsite.lbl.gov | | Radiance Modeling list (unmoderated): radiance-model@radsite.lbl.gov | | Requests to subscribe/cancel: radiance-request@radsite.lbl.gov | | Archives available from: ftp://radsite.lbl.gov/rad/pub/digest | | Radiance world-wide web site: http://radsite.lbl.gov/radiance/ | +---------------------------------------------------------------------------+