Wed, 19 Mar 2003
PVMPOV = PVM + POV-Ray
PVM is a message passing system that enables a network of computers to be used as a single distributed memory parallel computer. This network is referred to as the Parallel Virtual Machine.
POV-Ray is a 3-dimensional raytracing engine. It takes information you supply and simulates the way light interacts with the objects you've defined to create stunning 3d pictures and animation. This process is called rendering.
PVMPOV has the ability to distribute a rendering across multiple heterogeneous systems. Parallel execution is only active if the user gives the “+N” option to PVMPOV. Otherwise, PVMPOV behaves the same as regular POV-Ray and runs a single task only on the local machine.
Using the PVM model, there is one master and many slave tasks. The master has the responsibility of dividing the image up into small blocks, which are assigned to the slaves. When the slaves have finished rendering the blocks, they are sent back to the master, which combines them to form the final image. The master does not render anything by itself, although there is usually a slave running on the same machine as the master, since the master doesn't use very much CPU power.
If one or more slaves fail, it is usually possible for PVMPOV to complete the rendering. PVMPOV starts the slaves at a reduced priority by default, to avoid annoying the users on the other machines. The slave tasks will also automatically time out if the master fails, to avoid having lots of lingering slave tasks if you kill the master.
The PVM patches to POV-Ray are very easy to install. The entire operation should take only a few minutes once you have the source code. But before compiling PVMPOV you must be sure to have the PVM library installed correctly.
Note: PVM is not included in the PVMPOV distribution. You have to download and install it manually; get it from the PVM home page. Also note that the PVMPOV patch only works with the Unix sources of POV-Ray.
The following files are required to build and run PVMPOV: pvmpov-3.1g2.tgz (the PVMPOV patch), povuni_s.tgz (the POV-Ray 3.1g Unix sources), and povuni_d.tgz (the POV-Ray 3.1g documentation and data files).
If you already have another version of POV-Ray 3.1 installed on your computers, then you only need to download pvmpov-3.1g2.tgz and povuni_s.tgz.
You should put these files someplace easily accessible that has at least 15Mb of free space; for the purposes of the rest of these examples, we will presume the sources are in $HOME/tmp, which is shared across all computers:
$ cd $HOME/tmp
$ wget -q http://prdownloads.sourceforge.net/pvmpov/pvmpov-3.1g2.tgz
$ wget -q ftp://ftp.povray.org/pub/povray/Old-Versions/Official-3.1g/Unix/povuni_s.tgz
$ wget -q ftp://ftp.povray.org/pub/povray/Old-Versions/Official-3.1g/Unix/povuni_d.tgz
Untar the PVMPOV patch file.
$ tar xfz pvmpov-3.1g2.tgz
This will create a pvmpov3_1g_2 directory. Change into this directory and extract the POV-Ray source files.
$ cd pvmpov3_1g_2
$ tar xfz ../povuni_s.tgz
$ tar xfz ../povuni_d.tgz
Once the source files have been extracted, apply the PVMPOV patch by executing the “inst-pvm” shell script.
$ ./inst-pvm
Trying to apply the patch.
Searching for rejected files
$

If you see nothing listed between the “Trying to apply ...” and “Searching for ...” lines, the patch was successfully applied to the POV-Ray sources, and you can continue to build the modified sources.
If there are problems with the patch (for example, some of the patches are misaligned with regard to the current version of the source), you will get error messages from the patch program. If this happens, all is not lost. It's pretty easy to look at the .rej files and then compare them to the sources and insert the patches by hand. The “inst-pvm” shell script just makes things a little more convenient.
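When hunks are rejected, the patch program leaves them in .rej files next to the sources. A quick way to locate them might look like this (a sketch; the file name shown is a hypothetical example):

```shell
# Run from the pvmpov3_1g_2 directory: list every rejected hunk
# that the patch program left behind.
find . -name '*.rej' -print
# For each hit, compare the rejected hunk against the matching
# source file and merge the change by hand, e.g.:
#   less ./povray31/source/optout.h.rej   # hypothetical file name
```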
Another common problem is that people are not using the GNU patch utility, which should be present on most UNIX systems. Other patch programs may not work.
After the patch has been applied successfully you can build the PVMPOV binaries. Change into the povray31/source/pvm directory and type “aimk newunix”. (“aimk” is a wrapper program for “make”, used to portably select options for building PVM and PVM applications on various machines; “aimk” is part of the PVM suite.) When the compilation finishes, build the display-capable versions of PVMPOV by executing “aimk newsvga” and “aimk newxwin”. The binaries will then be in povray31/source/pvm/$PVM_ARCH.
If the compilation fails, make sure that you have set up PVM correctly (i.e. that the environment variables PVM_ROOT and PATH fit your installation).
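A minimal environment setup might look like the following (the PVM_ROOT location is an assumption; adjust it to wherever PVM is actually installed on your system):

```shell
# PVM_ROOT must point at the top of the PVM installation
# (/usr/lib/pvm3 is an assumed location).
export PVM_ROOT=/usr/lib/pvm3
# pvmgetarch, shipped with PVM, prints the architecture name
# (e.g. LINUX); aimk uses it to pick the right build directory.
export PVM_ARCH=`$PVM_ROOT/lib/pvmgetarch`
# Make aimk, the pvm console and the per-architecture binaries reachable.
export PATH=$PATH:$PVM_ROOT/lib:$PVM_ROOT/bin/$PVM_ARCH
```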
$ cd povray31/source/pvm
$ aimk newunix
making in LINUX/ for LINUX
/home/flierl/tmp/pvmpov3_1g_2/povray31/source/pvm/LINUX
rm -f ./povray.o ./render.o ./userio.o ./vbuffer.o pvm.o
cp ../../unix/unixconf.h config.h
(cd ..; aimk unix)
making in LINUX/ for LINUX
/home/flierl/tmp/pvmpov3_1g_2/povray31/source/pvm/LINUX
make: Entering directory `/home/flierl/tmp/pvmpov3_1g_2/povray31/source/pvm/LINUX'
gcc -O6 -ansi -finline-functions -ffast-math -c -Wall -DCOMPILER_VER=\".`uname`.gcc\" -I. -I.. -I../.. -I../../unix -I../../libpng -I../../zlib -I/usr/X11R6/include ../../atmosph.c -o atmosph.o
...
gcc -O6 -ansi -finline-functions -ffast-math -c -Wall -DCOMPILER_VER=\".`uname`.gcc\" -I. -I.. -I../.. -I../../unix -I../../libpng -I../../zlib -I/usr/X11R6/include ../../unix/unix.c -o unix.o
gcc ./atmosph.o ./bbox.o ./bcyl.o ./bezier.o ./blob.o ./boxes.o ./bsphere.o ./camera.o ./chi2.o ./colour.o ./cones.o ./csg.o ./discs.o ./express.o ./fractal.o ./gif.o ./gifdecod.o ./hcmplx.o ./hfield.o ./iff.o ./image.o ./interior.o ./lathe.o ./lbuffer.o ./lighting.o ./matrices.o ./media.o ./mem.o ./mesh.o ./normal.o ./objects.o ./octree.o ./optin.o ./optout.o ./parse.o ./parstxtr.o ./pattern.o ./pgm.o ./pigment.o ./planes.o ./png_pov.o ./point.o ./poly.o ./polygon.o ./polysolv.o ./povray.o ./ppm.o ./prism.o ./quadrics.o ./quatern.o ./rad_data.o ./radiosit.o ./ray.o ./render.o ./sor.o ./spheres.o ./super.o ./targa.o ./texture.o ./tokenize.o ./torus.o ./triangle.o ./truetype.o ./txttest.o ./userio.o ./vbuffer.o ./vlbuffer.o ./warps.o ./pvm.o ./unix.o /usr/lib/pvm3/lib/LINUX/libpvm3.a /usr/lib/pvm3/lib/LINUX/libgpvm3.a -L../../libpng -lpng -L../../zlib -lz -lm -o pvmpov
make: Leaving directory `/home/flierl/tmp/pvmpov3_1g_2/povray31/source/pvm/LINUX'
$ aimk newsvga
...
$ aimk newxwin
...
Depending on your setup, the PVMPOV binaries can be installed in several ways. One convenient method:
$ su -
$ cd /home/flierl/tmp/pvmpov3_1g_2/povray31/source/pvm
$ aimk install

This will copy the binaries to $PVM_ROOT/bin/$PVM_ARCH and create symbolic links in /usr/local/bin.
The following is an example from Jason Hough, and was generated on a group of six Solaris based 4-processor SPARCstation 20s. His home directory is NFS mounted to all of these hosts.
You first must have a PVM daemon launched on each host that will be participating in the rendering. Create a file called pvm.hosts which should contain some information needed for the pvm daemon to run. Refer to the PVM documentation (“man pvmd3”) to get more info about PVM's host file format.
Jason Hough keeps the PVM daemon installed in a directory called “bin” relative to his home directory, given by “dx=./bin/pvmd3”, and the PVMPOV binaries in architecture-specific subdirectories under “bin” (i.e. “bin/SUN4”, “bin/SUNMP”, “bin/LINUX”, etc.), given by the execution path “ep=./bin”, so his pvm.hosts file looks like:
$ cat $HOME/pvm.hosts
glee dx=./bin/pvmd3 ep=./bin
elation dx=./bin/pvmd3 ep=./bin
ecstasy dx=./bin/pvmd3 ep=./bin
bliss dx=./bin/pvmd3 ep=./bin
delight dx=./bin/pvmd3 ep=./bin
rapture dx=./bin/pvmd3 ep=./bin
The following command launches the PVM daemons:
$ pvm pvm.hosts
3.3.7 t40001
pvm> conf
6 hosts, 1 data format
    HOST     DTID     ARCH   SPEED
    glee    40000    SUNMP    1000
 elation    80000    SUNMP    1000
 ecstasy    c0000    SUNMP    1000
   bliss   100000    SUNMP    1000
 delight   140000    SUNMP    1000
 rapture   180000    SUNMP    1000
pvm> quit
pvmd still running.
$
Type “quit” at the PVM prompt to exit the PVM interface and leave the PVM daemons still running.
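When all rendering is finished, the daemons can be shut down again from the same console (shown as a transcript in comments; “halt” kills the pvmd3 daemon on every host in the virtual machine):

```shell
# Reconnect to the running virtual machine and shut it down:
#   $ pvm
#   pvm> halt
```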
Now that the PVM daemons are up and waiting for work to do, we can render.
POV-Ray needs object script files (.pov) to raytrace, and there are many places on the Internet you can obtain .pov files from. For your first rendering, you may want to check out the POV-Ray benchmarking site POVBench and get the skyvase.pov file. This file is used to benchmark and compare computers of varying designs and can provide a way of measuring your parallel virtual machine's performance.
Note that for these multi-processor machines Jason Hough forces PVMPOV to start more tasks than the default 1 per host, and uses a 64x64 block size:
$ pvmpov +Iskyvase.pov +Oskyvase.tga +NT24 +NW64 +NH64 +v
POV-Ray Options in effect: +v1 +ft +mb25 +NT24 +NN5 +NW64 +NH64 +a0.300
+j1.000 +b999 +r3 -q9 -w1024 -h768 -s1 -e768 -k0.000 -mv2.0 +Iskyvase.pov
+Oskyvase.tga
...at least 13 tasks successfully spawned in time.
...Don't worry, more are on the way, I'm just not waiting
PVM Task Distribution: Tasks-24 Grid width-64 Grid height-64 Sections-192
Waiting for slave stats.
PVM Task Distribution Statistics:
       host name  [ done ] [ late ]       host name  [ done ] [ late ]
            glee  [ 4.17%] [ 0.00%]            glee  [ 4.17%] [ 0.00%]
            glee  [ 4.17%] [ 0.00%]            glee  [ 4.17%] [ 0.00%]
         elation  [ 4.69%] [ 0.00%]         elation  [ 4.17%] [ 0.00%]
         elation  [ 4.17%] [ 0.00%]         elation  [ 4.17%] [ 0.00%]
         ecstasy  [ 3.65%] [ 0.00%]         ecstasy  [ 4.69%] [ 0.00%]
         ecstasy  [ 4.69%] [ 0.00%]         ecstasy  [ 4.17%] [ 0.00%]
           bliss  [ 3.65%] [ 0.00%]           bliss  [ 4.17%] [ 0.00%]
           bliss  [ 4.69%] [ 0.00%]           bliss  [ 3.65%] [ 0.00%]
         delight  [ 3.65%] [ 0.00%]         delight  [ 4.17%] [ 0.00%]
         delight  [ 4.17%] [ 0.00%]         delight  [ 4.17%] [ 0.00%]
         rapture  [ 4.69%] [ 0.00%]         rapture  [ 4.17%] [ 0.00%]
         rapture  [ 4.17%] [ 0.00%]         rapture  [ 3.65%] [ 0.00%]

skyvase.pov statistics
--------------------------------------
Resolution 1024 x 768
# Rays:                 3773743
# Pixels:                798720
# Pixels supersampled:    17381

Ray->Shape Intersection Tests:
  Type        Tests      Succeeded   Percentage
-----------------------------------------------------------
  Sphere      6304452     1170727      18.57
  Plane      63822062    35385552      55.44
  Quadric     6304452     2770858      43.95
  Cone        5918163     4839298      81.77
  Bounds      5918163     3152226      53.26

Calls to Noise:   4327871
Calls to DNoise:  5141872

Shadow Ray Tests:        10498615
Blocking Objects Found:    254807
Reflected Rays:           2818594

Time For Trace:    0 hours  0 minutes  47.00 seconds
Note that, for comparison with other skyvase benchmarks, this image is rendered at 1024x768 instead of the usual 640x480.
The following bugs and limitations are known:
$ pvmpov +NASUNMP +NT24 +N +Iskyvase.pov +Oskyvase.tga
Have a look at the PVM book.
It is important to note that by varying the size of the grid sections, you can affect the performance of the rendering. If a rendering is very complex in a small portion of the image, then a finer grain may help: more of the tasks are able to migrate towards the grid sections that are more complex. Conversely, if you have a shorter render or a slower network, it may be advantageous to use larger blocks, both to reduce network overhead and to ensure the slaves are not left idle waiting for blocks to render.

Another thing to mention is that you should not use PVMPOV for short renderings (e.g. tens of seconds), as this is slower than rendering on one fast machine. You must also consider overhead when using antialiasing. Antialiasing requires the line segments above and below the grid section to be traced so that super-sampling may occur. If the height of the grid is reduced and antialiasing is turned on, your percentage of overhead goes up. For example, setting a height of four (“+NH4”) with antialiasing would incur more than 25% overhead. If the image size is not an integer multiple of the grid size, the edge blocks are simply smaller (i.e. the extra pixels aren't rendered), so it is not necessary for the grid size to evenly divide the image.
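As a rough sanity check of the block arithmetic (a sketch using ceiling division, since edge blocks are smaller rather than skipped), the number of sections for a given image and grid size can be computed directly. For the 1024x768 render above with 64x64 blocks this yields the 192 sections reported in the task-distribution statistics:

```shell
# Blocks across times blocks down, rounding partial blocks up.
width=1024 height=768 gw=64 gh=64
sections=$(( ( (width + gw - 1) / gw ) * ( (height + gh - 1) / gh ) ))
echo $sections    # 192 for 1024x768 with 64x64 blocks
```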
If libpng and zlib are already installed on your system, rename the bundled libpng and zlib source directories (e.g. to libpng.notused and zlib.notused) and compile PVMPOV against the libraries already available on your system. This saves you the errors from POV-Ray about running with different libraries than it was compiled against.
Valid POV-Ray command-line options relating to PVM are:
This is the default for starting PVMPOV: one slave will be started on each available host, regardless of architecture, and the blocks will be 32x32 pixels in size. The slaves will be started with a nice value of 5, which means they will run at a lower priority than other user jobs.
Turns off PVM support; PVMPOV then runs exactly like normal POV-Ray.
Start n tasks on the available PVM hosts. This is usually only useful for debugging on a single machine, or for starting more than one task on multi-processor hosts. If, for example, you have 10 machines with 4 CPUs each, you could specify “+NT40” to start 4 processes on each host (and the OS will hopefully run one on each CPU).

Note that PVM is not clever about the way it starts tasks, so if, in the previous example, one of the hosts has only one CPU, it will still have 4 slaves started on it. You can use the “pvm_hosts” option (see below) to control on which machines the tasks are started. Starting multiple tasks on a single processor will always be less efficient than a single task, because of context switching and extra message passing.
Start the tasks only on the PVM architecture “arch”. If “+NT” is not given, one task will be started on each of the hosts of the given architecture.
Run the slaves at a scheduling priority of “n”. The default scheduling priority is 5. In general, changing the priority value will not affect performance very much, but may get others upset with you. The nicest setting for PVMPOV is 20, while the least nice setting is 0. Note that these values are always used even on systems that use scheduling priority values from 20-40. See the installation document and the info page for nice for more information.
Change the width of the blocks to “n” pixels. The default width is 32 pixels.
Change the height of the blocks to “n” pixels. The default height is 32 pixels.
Uses “slave” as the filename for the slave tasks. If you do not specify this option, PVMPOV will use the current executable name for the slaves as well.

Using this option you can, for example, run the X11 version (x-pvmpov) as the master (to display the results) and a version without any display support (pvmpov) as the slaves.
Sets the working directory for the slaves. By default PVMPOV tries to run the slaves in the same directory as the master. But sometimes 'getcwd' gives misleading output (e.g. when automounting is used), or you may simply want to run the slaves in some other directory; in those cases you can use this option.
Sets the names of the hosts to use for the slaves. Note that no spaces are allowed between the names.

By default PVM distributes the processes in some order to the available machines. Sometimes PVM's choice is not the best, so you can specify explicitly which machines to use. You may start more tasks than the number of hosts you specify here. For example: “pvm_hosts=darkstar,darkstar,baby +NT6”. This will start six tasks (“+NT6” must come after “pvm_hosts”): four on darkstar and two on baby.
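Putting several of these options together, a hypothetical invocation might look like this (shown in comments; host names and scene file are placeholders):

```shell
# Six niced slaves: four on the 4-CPU host "darkstar", two on "baby",
# rendering 64x64 pixel blocks. "+NT6" must come after "pvm_hosts".
#   pvmpov pvm_hosts=darkstar,darkstar,baby +NT6 +NN10 \
#          +NW64 +NH64 +Iscene.pov +Oscene.tga
```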
Copyright © 1994-2003 PVMPOV Team.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation.