To deploy your project, copy the prcf_server.tar.bz2 or prcf_server.zip file that you created in Project Distribution to the computer that will be the main server for your public resource computing project. If your server is a UNIX-like system, use the tar.bz2 file so that the scripts retain the correct permissions. Choose a convenient directory for your project, and uncompress the distribution file there.

Start the server by running the appropriate start_server script for your platform. The server prints status information to the console, and you can also monitor its progress by inspecting the server.log file in the log directory. On Windows, if the Windows firewall asks whether you want to block the Java program from accessing the network, choose to unblock it.

As explained in Project Distribution, the framework does not provide a way to start your other server components, the work unit generator and the result validator, because these components might run on different computers. At this point you should start those components yourself; they should be located in the server directory. If your project uses a remote transitioner, you can start that component with the start_remote_transitioner script. Your project should now be ready to accept client connections.

Be aware that when you need to shut down the project server, you must use the stop_server script; failing to do so may cause some or all of your data to be lost. If you execute the stop_server script and the server has not shut down several minutes later, a problem with one of the XML-RPC connections to your server may be preventing the shutdown from completing. In this case you may need to execute the emergency_shutdown script, which shuts down the server immediately, without waiting for all client transactions to complete.
The emergency_shutdown script should only be used as a last resort to shut down the server because it may cause inconsistency in the database if a transaction has begun but has not yet completed.
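Because the server logs its status to server.log, you can also monitor it programmatically rather than watching the console. The sketch below is illustrative only: it assumes the server writes plain-text lines to log/server.log (the path mentioned above) and simply prints the last few lines of that file; the class and method names are not part of the framework.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Minimal sketch: print the tail of the server log.
// Only the path log/server.log comes from the text; the rest is hypothetical.
public class LogTail {
    // Return the last 'count' lines of the given log file.
    public static List<String> tail(Path logFile, int count) throws IOException {
        List<String> lines = Files.readAllLines(logFile);
        int from = Math.max(0, lines.size() - count);
        return lines.subList(from, lines.size());
    }

    public static void main(String[] args) throws IOException {
        Path log = Path.of(args.length > 0 ? args[0] : "log/server.log");
        for (String line : tail(log, 10)) {
            System.out.println(line);
        }
    }
}
```

For long-running monitoring you would re-read the file periodically or use a file-watching API instead of loading the whole log each time.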
If your project uses an HSQLDB database and you need to view its contents directly, you can use the hsqldb_client script that is included in the server distribution file. First, shut down the project server using the stop_server script. Shutting down the server is necessary only for HSQLDB, because a file-based HSQLDB database can be opened by only one process at a time; other database types should allow simultaneous access. After the server has shut down, run the hsqldb_client script, which starts the HSQLDB client. Go to the File menu and select Connect; this displays a connection dialog. The only field you need to modify is the URL field: it should contain the string jdbc:hsqldb:file: followed by the path to your HSQLDB database, for example jdbc:hsqldb:file:data/prcdb. After you connect, you can execute SQL queries in the window at the top right of the HSQLDB client.
The volunteers who will contribute their computers' resources to the project should download the project_client.tar.bz2 or project_client.zip file. After uncompressing the file, they should use the start_client script to start the project client. The first time the project client runs, it prompts the volunteer to enter a desired user name and a valid e-mail address. That information is then stored in the cfg/client.properties file, from which it is read on subsequent executions of the project client.
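Since cfg/client.properties is an ordinary Java properties file, it can be inspected or pre-seeded with the standard java.util.Properties API, for example to configure many client machines without interactive prompts. The property key names used below (user.name, user.email) are assumptions for illustration; the framework's actual keys may differ.

```java
import java.io.IOException;
import java.io.Reader;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Sketch: write and re-read a client.properties-style file.
// The keys "user.name" and "user.email" are hypothetical examples.
public class ClientConfig {
    public static void save(Path file, String userName, String email) throws IOException {
        Properties props = new Properties();
        props.setProperty("user.name", userName);
        props.setProperty("user.email", email);
        try (Writer w = Files.newBufferedWriter(file)) {
            props.store(w, "project client configuration");
        }
    }

    public static Properties load(Path file) throws IOException {
        Properties props = new Properties();
        try (Reader r = Files.newBufferedReader(file)) {
            props.load(r);
        }
        return props;
    }
}
```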
At some point you will want to access the result data computed by the science application. The server distribution file contains a script called extract_results for this purpose. When you run the extract_results script, you will be prompted for your project password; the script is password protected because it can cause a high network and database load, making it a potential target for a denial-of-service (DoS) attack. The script connects to the project database, retrieves the data for all results, and writes this data to disk. It creates a folder called results, which is the base directory for all results. Inside that directory, a directory is created for each work unit, named by the ID of that work unit. Inside each work unit directory, a file called result.dat is created, containing the canonical result data for that work unit.
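Once extract_results has run, the layout described above (a result.dat file inside a per-work-unit directory under results) is straightforward to walk programmatically. A minimal sketch, assuming only that directory layout; the class name and the decision to key results by work unit ID are illustrative:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.TreeMap;

// Sketch: collect the canonical result data for each work unit from the
// results/<workUnitId>/result.dat layout produced by extract_results.
public class ResultReader {
    public static Map<String, byte[]> readAll(Path resultsDir) throws IOException {
        Map<String, byte[]> resultsById = new TreeMap<>();
        try (DirectoryStream<Path> workUnits = Files.newDirectoryStream(resultsDir)) {
            for (Path workUnitDir : workUnits) {
                if (!Files.isDirectory(workUnitDir)) {
                    continue; // skip stray files in the base directory
                }
                Path resultFile = workUnitDir.resolve("result.dat");
                if (Files.exists(resultFile)) {
                    resultsById.put(workUnitDir.getFileName().toString(),
                                    Files.readAllBytes(resultFile));
                }
            }
        }
        return resultsById;
    }
}
```

From here the bytes of each result.dat can be handed to whatever analysis code understands your science application's output format.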