FlowViewer FAQs

   1. The v3.0 package is different. Why?
   2. With v3.0 my bookmarks don't work. Why not?
   3. I start a report, but nothing comes back. Why not?
   4. I start a report, but nothing comes back. Why not? Part II
   5. I get a report back, but it has no data. What's up?
   6. I get a report back, but it has no data. Part III
   7. I get a report back, but it has no data. Part IV
   8. The settings are correct, but it still has no data. What's up?
   9. On long queries the browser seems to 'time out.' Why?
  10. FlowViewer works, but is slow. Why?
  11. FlowViewer stops unexpectedly, general unspecified problems, weirdnesses?
  12. Will new versions of FlowViewer mess up my existing trackings?
  13. Why do I sometimes get "*** attempt to put segment in horiz list twice"?
  14. I'm having problems and I'm running on a 64-bit system. Any known issues?
  15. I want to change netflow formats, any problems?
  16. FlowTracker is not letting me create Groups
  17. I'm seeing: flow-cat: Warning, partial inflated record before EOF
  18. I'm getting: "Must select a device or an exporter.", but I'm not using devices
  19. Does FlowViewer support IPFIX or netflow v9?
  20. FlowViewer takes a long time to complete. Why?
  21. The FlowTracker input screen is blank. Why?
  22. FlowGrapher will not generate a graph. Why not?
  23. flow-capture starts, but is not writing files. Why not?
  24. Why are the embedded links to Trackings not lining up on Group graphs?
  25. I point my browser to FlowViewer, but only see broken image symbols. Why?
  26. What is a good way to set up flow-tools?
  27. I'd like to replicate flows to another host. How do I do that?
  28. No FlowGrapher graphs. HTTP error_log: Illegal division by zero at .../axestype.pm?
  29. Sometimes FlowTracker_Collector takes more than 5 minutes, and freezes. Why?
  30. A FlowTracking name got messed up and I can't remove it. How can I delete it?
  31. FlowViewer returns nothing for Prefix reports (e.g., Source, Dest prefix, etc.)


1. The v3.0 package is different. Why?
Version 3.0 introduces FlowTracker, but it also provides an improvement that several users requested: they were tired of entering the day and time with each invocation of FlowViewer or FlowGrapher. The new architecture does away with create_FlowViewer_webpage and create_FlowGrapher_webpage and instead has the user point the browser to FlowViewer.cgi or FlowGrapher.cgi. The start and end times are now pre-filled according to the start_offset and end_offset parameters in the FlowViewer_Configuration.pm file.
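
For example, the relevant lines in FlowViewer_Configuration.pm might look something like the following. The parameter names are the ones mentioned above; the values and their units are purely illustrative, so check the comments in your own configuration file:

   # Pre-fill the report start and end times relative to 'now'
   # (illustrative values; consult the comments shipped with FlowViewer_Configuration.pm)
   $start_offset = 86400;
   $end_offset   = 0;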


2. With v3.0 my bookmarks don't work. Why not?
See FAQ #1 above. The structure of the scripts has changed; the user should now point the browser (and set bookmarks) to FlowViewer.cgi, FlowGrapher.cgi, and now FlowTracker.cgi instead of /htp/htdocs/FlowViewer/index.html, etc. (or however your HTTP environment was set up).


3. I start a report, but nothing comes back. Why not?
This could be caused by your web server CGI settings. Examine the httpd.conf file to make sure that the web server is set up to execute CGI. Make sure that the FlowViewer_Configuration.pm parameters $cgi_bin_directory and $cgi_bin_short are set correctly with respect to your web server environment. Typically, the cgi-bin directory is aliased. Here is an example from Apache:

   #
   # ScriptAlias: This controls which directories contain server scripts.
   # ScriptAliases are essentially the same as Aliases, except that
   # documents in the realname directory are treated as applications and
   # run by the server when requested rather than as documents sent to the client.
   # The same rules about trailing "/" apply to ScriptAlias directives as to
   # Alias.
   #
   ScriptAlias /cgi-bin/ "/htp/cgi-bin/"

In this case, provided that the contents of the FlowViewer package reside in the /htp/cgi-bin/FlowViewer_3.0 directory, the relevant parameters and settings would be:

   $cgi_bin_directory = "/htp/cgi-bin/FlowViewer_3.0";
   $cgi_bin_short     = "/cgi-bin/FlowViewer_3.0";

And, as always, make sure that all relevant directories have been created and permit the web-server process to write into them. This includes the 'reports', 'graphs', 'tracker', 'names', 'work', and 'log' (if you're logging) directories. The following can help you get started; afterwards you can tighten things up as you want:

   From the $cgi_bin_directory issue a 'chmod -R 0777 *'
   From the $flow_data_directory issue a 'chmod -R 0777 *'
   From the $reports_directory issue a 'chmod -R 0777 *'
   From the $graphs_directory issue a 'chmod -R 0777 *'
   From the $tracker_directory issue a 'chmod -R 0777 *'

Turn on debug ($debug_viewer = "Y";, etc.), make a run, and examine the DEBUG_VIEWER output. The output will contain the text of the flow-tools command that was created. Cut and paste this command to a command prompt, run it, and review the results. This may give you a clue to what is happening. You can also simply run FlowViewer.cgi, FlowGrapher.cgi, or FlowTracker.cgi from the command line. This may provide a good hint. For example:

   'cannot mkdir /var/www/FlowGrapher_3.2/: Permission denied at FlowGrapher.cgi line 58.'

This would mean that you have to loosen permissions on /var/www, or create the subdirectory yourself with adequate permissions (e.g., 0777).


4. I start a report, but nothing comes back. Why not? Part II
Perhaps you haven't created the directory pointed to by $work_directory. This would prevent processing from completing.
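
For example, assuming $work_directory is set to /var/www/Flow_Working (an illustrative path; use whatever value is in your FlowViewer_Configuration.pm):

   # Create the working directory and let the web server process write into it
   mkdir -p /var/www/Flow_Working
   chmod 0777 /var/www/Flow_Working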


5. I get a report back, but it has no data. What's up?
Make sure the FlowViewer scripts are reading flow-data from the correct directory. FlowViewer will look for flow-data according to three settings in the FlowViewer_Configuration.pm file. These are:
   a. $flow_data_directory
   b. @devices
   c. $N

For example, here we track netflow data from several devices using the default flow-tools nesting value. Our file structure looks like:

   /htp/flows/ecs_edc/2006/2006-01/2006-01-19/ft-v05.2006-01-19.000001-0500
   <-- a --->|<- b ->|<---------- c -------->|<-- actual flow-data file -->

In this case a) '/htp/flows' is our flow_data_directory, b) 'ecs_edc' is one of our devices, and c) the three levels of nested date-ordered directories are addressed by setting $N = 3 (the FlowViewer default). Note that $N can be confusing because the flow-tools documentation indicates that -N0 is the default, but if you do not put a '-N' modifier on your flow-capture statement, it will behave as if -N3 has been set.

In our FlowViewer_Configuration.pm, the variables are set as follows:

   $flow_data_directory = "/htp/flows";
   @devices = ("ecs_edc","router_1","router_2","router_3");
   $N = 3;

Also, verify that the flow-tools are in the $flow_bin_directory you have specified. This can be accomplished with, e.g., 'which flow-stat'.


6. I get a report back, but it has no data. Part III.
Another possibility for this problem is that the timestamps on the flows are not what you are expecting, and hence the data is completely filtered out. For example, you may wish to see everything from 10:00:00 to 11:00:00, but the report is empty, and you're sure you have data because there are plenty of non-zero sized ft... files in your flow-data directory. It may be that the flows are timestamped quite differently from the file timestamp. In this case a simple "flow-print -f5 < ft-v05.2006-01-19.100001-0500" will list the flows with their embedded timestamps. The output could be long, so you might want to redirect it to a file first. Compare the flow timestamps to what you are expecting. If they are off, then perhaps your router's time setting is off, or your computer's time setting is off.
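
To make the comparison easier, you can dump the embedded flow timestamps to a file and browse them; a sketch using the example file name above:

   # List flows with their embedded start/end times; redirect since the output can be long
   flow-print -f5 < ft-v05.2006-01-19.100001-0500 > /tmp/flow_times.txt
   less /tmp/flow_times.txt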


7. I get a report back, but it has no data. Part IV.
In the situation where you generated a large FlowViewer or FlowGrapher report you may have generated a temporary intermediate file (e.g., /tmp/FlowGrapher_output_070406) that exceeds the amount of space available to the partition that holds your working directory (e.g., you used up all of /tmp space.) To fix this, remove the offending file, and either run a smaller report, or increase the size of your working directory, or move it to a directory on a larger partition.
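
A quick way to check whether the working partition has filled up, using the /tmp example above:

   # How full is the partition holding the working directory?
   df -h /tmp
   # Which intermediate files are taking up the space?
   ls -lh /tmp/FlowGrapher_output_*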


8. The settings are correct, but it still has no data. What's up?
Another possibility for an empty report is that the web server (e.g., Apache) that is running the CGI scripts does not have adequate permission to read from the flow-data directories or files. Review the permissions of the flow-data directories and files to make sure they are 'open' enough. Make sure that Apache can get access to the flow-tools specified by the $flow_bin_directory parameter. The following can help you get started; afterwards you can tighten things up as you want:

   From the $cgi_bin_directory issue a 'chmod -R 0777 *'
   From the $flow_data_directory issue a 'chmod -R 0777 *'
   From the $reports_directory issue a 'chmod -R 0777 *'
   From the $graphs_directory issue a 'chmod -R 0777 *'
   From the $tracker_directory issue a 'chmod -R 0777 *'

If you are running a version of Security Enhanced Linux (SELinux), verify that there are no file or directory access controls that are preventing Apache from accessing either the flow-data directories and files, or the flow-tools themselves. Or you could disable SELinux functionality: in the /etc/selinux/config file, set SELINUX=disabled.
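
If you suspect SELinux, a quick way to check and temporarily relax it (standard SELinux commands; your policy details may differ):

   # Show the current SELinux mode (Enforcing, Permissive, or Disabled)
   getenforce
   # Temporarily switch to permissive mode and re-run the report to see if SELinux is the cause
   setenforce 0
   # To make it permanent, set SELINUX=permissive (or disabled) in /etc/selinux/config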


9. On long queries the browser seems to 'time out.' Why?
When you have requested a time period that requires the analysis of many flows, no data is sent to the browser while flow-tools is cranking away. As a consequence, the connection drops, closing the data path before any results can be returned.
Reset either the web server or web browser setting that controls this. For example, Apache has a Timeout value that controls this and defaults to 300 seconds. Adjust it to 1800, which will permit browser-to-server connections to stay open for 30 minutes.
Apache example, in the httpd.conf file:

   #
   # Timeout: The number of seconds before receives and sends time out.
   #
   #Timeout 300
   Timeout 1800

Remember to stop/restart your web server in order to read the new httpd.conf settings.
Some have had to modify a similar setting on their browsers.


10. FlowViewer works, but is slow. Why?
Most likely the FlowViewer script is not taking advantage of the caching that the 'names' file provides. Make sure that your web server process owner (e.g., Apache) has adequate permission to write into the directory identified by the $names_directory parameter in the FlowViewer_Configuration.pm file. For example, set the permissions to 0777 for the $names_directory. Also, make sure that the permissions on the 'names' file itself are open enough for the web server process owner to write to the file. Note also that queries over long time periods cause the flow-tools flow-cat process to really crank through a lot of data for busy routers. So if you are looking at a busy device, you will get better response times for shorter queries.


11. FlowViewer stops unexpectedly, general unspecified problems, weirdnesses?
Permissions. Many problems are caused by restrictive file permission settings. This is particularly important with FlowTracker. With FlowTracker you have the web process owner (e.g., apache) taking care of creating, modifying, and deleting Trackings, but you may have a different user (perhaps your own account) starting and running the FlowTracker_Collector and FlowTracker_Grapher scripts. Inadequate permissions will stop things in their tracks.

There are at least two ways out of this jam. The first is to set up and run everything as the web server process owner (e.g., apache): installing, creating directories, and executing scripts (e.g., FlowTracker_Collector) as that user. The other way is to make sure all scripts, directories, subdirectories, files, etc. have permissions that give the owner and the web server process equal ability to read, write, and execute all files, directories, scripts, etc. A good way to aid in this is to put both accounts in the same group and give the group write permissions; you might also have to reset the umask for each of these accounts. An example of that group setup follows below.
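
Here is a minimal sketch of the shared-group approach; the group name, account names, and installation path are examples only:

   # Create a shared group and add both the web server user and your own account to it
   groupadd flowview
   usermod -a -G flowview apache
   usermod -a -G flowview youraccount
   # Give the group read/write (and directory traverse) rights over the installation
   chgrp -R flowview /htp/cgi-bin/FlowViewer_3.0
   chmod -R g+rwX /htp/cgi-bin/FlowViewer_3.0
   # Have new files created group-writable (set in each account's shell profile)
   umask 002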


12. Will new versions of FlowViewer mess up my existing trackings?
No. Care has been taken to preserve existing Trackings with new versions of FlowViewer, including new versions of FlowTracker_Collector and FlowTracker_Grapher. After configuring the FlowViewer_Configuration.pm file for your environment, and making sure that the $filter_directory and $rrdtool_directory directories contain the existing filters and RRDtool databases, the user can simply 'kill' the running versions of FlowTracker_Collector and FlowTracker_Grapher and start up the new versions.
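
A rough sketch of the switchover, assuming you launch the scripts directly from the installation directory (adjust the paths and launch method to your own setup):

   # Find and stop the currently running collector and grapher
   ps -ef | grep FlowTracker
   kill <collector_pid> <grapher_pid>
   # Start the new versions from the new installation directory
   cd /htp/cgi-bin/FlowViewer_3.0
   ./FlowTracker_Collector &
   ./FlowTracker_Grapher &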


13. Why do I sometimes get "*** attempt to put segment in horiz list twice"?
Occasionally FlowTracker_Grapher will output this error message into the shell it was launched from. As best I can tell, this is caused by a bug in some old versions of librsvg (or similar) that fails to cope with some SVG images during RRDtool's generation of the graph. It appears to be harmless.


14. I'm having problems and I'm running on a 64-bit system. Any known issues?
This is fixed in the new 'forked' version of flow-tools found here: flow-tools v 0.68

If you are installing the previous version, the following applies. Yes, some 64-bit users are having problems. The best I can tell at this point is that there is a bug in flow-tools when deployed on a 64-bit platform. There are three solutions that I'm aware of. The first is a patch to flow-tools by Mike Hunter:

flow-tools 64-bit patch 1

The second is a more extended patch (Paul Komkoff Jr.) that uses a temporary variable:

flow-tools 64-bit patch 2

The third approach (Ryan Gerdes) was to use binaries for key flow-tools components (flow-cat, flow-print, flow-nfilter, and flow-stat) compiled for the 32-bit version of the OS.


15. I want to change netflow formats, any problems?
The flow-tools flow-cat process does not concatenate across varying netflow type boundaries. That is, if you run a FlowViewer report that includes v5 and v7 data (for example), no report will be generated. If you use the DEBUG feature, cut and paste the generated flow-tools command string onto a command prompt, and run it, you will get the following error message:
flow-cat: data version or sub version changed!
flow-tools will work on either type by itself, so as long as you confine the requested time period to one or the other, you'll be OK. Or you can have flow-tools store the v7 data as v5:
flow-capture -V 5 0/0/*** -w /blah/blah/blah
As far as losing any data, according to Mark Fullmer:
"You'll lose the router_sc field. AFAIK unless there are multiple routers providing shortcut paths to the switching module this field will never change."
Flow-tools mailing list email that discusses this, including how to use flow-xlate to merge:
Version change discussion


16. FlowTracker is not letting me make Groups
If the FlowTracker Group page appears but there is no sample graph at the top, or you receive an "Internal Server Error" (most likely a Perl compilation problem), it could be that you haven't correctly installed RRDs.pm. A quick way to check for this is to issue a 'perl -c FlowTracker_Group.cgi' from a command line. If there is an RRDs.pm location problem the script will not compile. This problem can be tricky and I will try to make it easier in the next version. In the meantime the easiest way to fix this is:

1. Do a 'perl -V' from a command line, and look at the @INC array:

   @INC:
     /usr/lib/perl5/5.8.5/i386-linux-thread-multi
     /usr/lib/perl5/5.8.5
     /usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi
     ( ... more )

2. Identify the most likely directory into which to put a copy of RRDs.pm, probably:

   /usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi

3. Copy RRDs.pm into that directory:

   from: /usr/local/rrdtool-1.2.26/lib/perl/5.8.5/i386-linux-thread-multi/RRDs.pm
   to:   /usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi/RRDs.pm

4. Copy the RRDs and RRDp 'auto' subdirectories and their contents into the Perl 'auto' subdirectory:

   from: /usr/local/rrdtool-1.2.26/lib/perl/5.8.5/i386-linux-thread-multi/auto/RRDp
   to:   /usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi/auto/RRDp

   from: /usr/local/rrdtool-1.2.26/lib/perl/5.8.5/i386-linux-thread-multi/auto/RRDs
   to:   /usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi/auto/RRDs

Note: the above can be accomplished using links instead of copying (a symlink sketch follows at the end of this answer). Another thing to watch for is using a special character in the FlowTracker 'Tracking Set Label' text box. This field is used to create a file name and is allergic to many special characters.
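
If you prefer links over copies, here is a sketch of the symlink approach using the same example paths as above (adjust them to your own 'perl -V' output and RRDtool install location):

   # Link RRDs.pm and the 'auto' subdirectories into Perl's site_perl tree
   ln -s /usr/local/rrdtool-1.2.26/lib/perl/5.8.5/i386-linux-thread-multi/RRDs.pm \
         /usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi/RRDs.pm
   ln -s /usr/local/rrdtool-1.2.26/lib/perl/5.8.5/i386-linux-thread-multi/auto/RRDs \
         /usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi/auto/RRDs
   ln -s /usr/local/rrdtool-1.2.26/lib/perl/5.8.5/i386-linux-thread-multi/auto/RRDp \
         /usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi/auto/RRDp
   # Re-check that the script now compiles
   perl -c FlowTracker_Group.cgi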


17. I'm seeing: "flow-cat: Warning, partial inflated record before EOF"
This error message may indicate that you are trying to read an empty directory. This error would appear for Exporter users in the very initial release of version 3.3. It was caused by FlowTracker_Collector trying to read a device_name directory even though the user was not using devices. This was fixed in version 3.3.1. This error may also occur during the processing of data in normal directories. It is not understood at this point why this happens; however, it appears to be mostly harmless. I have seen it occur on every third FlowTracker_Collector run (every 15 minutes), which coincides with the end of a typical 15-minute flow-tools ft file.


18. I'm getting: "Must select a device or an exporter.", but I'm not using devices
This would happen for early users of version 3.3 if they were not using devices or exporters. This was fixed in version 3.3.1.


19. Does FlowViewer support IPFIX or netflow v9?
No. Not at this point. The engine that runs FlowViewer, flow-tools, has not been modified to accept version 9 netflow packets. I am not sure whether the current maintainers of flow-tools (i.e., Paul Komkoff, et al.) will be making this upgrade or not. (I hope so :-)


20. FlowViewer takes a long time to complete. Why?
This could be because your environment does not have a properly working DNS resolution capability. FlowViewer (and FlowGrapher) default to "Y" for Resolve Addresses on the input screen, so FlowViewer attempts to resolve each IP address; if there is no working resolving capability, each lookup must fail or time out, and the report takes a long time to complete. You should set this field to "N". You could modify the FlowViewer.cgi and FlowGrapher.cgi scripts to select "N" instead of "Y" if you like. I will try to put such a switch in the next version.
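
To confirm whether reverse DNS is working from the FlowViewer host, try a reverse lookup for one of the addresses appearing in your reports (the address below is just an example):

   # If this hangs or times out, set 'Resolve Addresses' to N
   host 192.168.100.1
   # or equivalently
   dig -x 192.168.100.1 +short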


21. The FlowTracker input screen is blank. Why?
This could be caused by uncreated directories for $filter_directory, or $rrdtool_directory. Create these directories and provide them with adequate permissions to solve this problem.
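
A minimal sketch, with illustrative paths (use whatever you have set for $filter_directory and $rrdtool_directory in FlowViewer_Configuration.pm):

   # Create the directories FlowTracker expects and let the web server write into them
   mkdir -p /var/www/FlowTracker_Filters /var/www/FlowTracker_RRDtool
   chmod 0777 /var/www/FlowTracker_Filters /var/www/FlowTracker_RRDtool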


22. FlowGrapher will not generate a graph. Why not?
This may be caused by a non-optimal installation of GD (and libgd). A good way to test this is to issue a 'perl -c FlowGrapher_Main.cgi' from a command line. If you get the following message, or something like it, you have probably installed the GD components in a location that Perl is not familiar with:

   "Can't load '/usr/lib/perl5/site-perl/5.8.5/i386-linux-thread-multi/auto/GD/GD.so' for module GD:
   libgd.so.2: cannot open shared object file. No such file or directory
   at /usr/lib/perl5/5.8.5/i386-linux-thread-multi/DynaLoader.pm line 230.
   at FlowGrapher_Main.cgi line 82."

A soft link or a reinstall will help solve this.
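
If libgd is installed but lives somewhere the dynamic linker does not look, something along these lines may help; the library locations are assumptions, so check where libgd.so.2 actually resides on your system:

   # See which shared libraries Perl's GD.so needs and which are not being found
   ldd /usr/lib/perl5/site-perl/5.8.5/i386-linux-thread-multi/auto/GD/GD.so
   # If libgd.so.2 is in /usr/local/lib, make it visible to the dynamic linker
   ln -s /usr/local/lib/libgd.so.2 /usr/lib/libgd.so.2
   ldconfig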


23. flow-capture starts, but is not writing files. Why not?
Everything looks fine: flow-capture is running in the background, 'netstat -an' shows it listening OK on the specified port, and tcpdump shows netflow UDP packets arriving, yet no capture files are being created. Have you created the directory that the netflow data is supposed to go into? For example, if your flow-capture command looks like this:

   flow-capture -p /var/flows/pids/flowtool.pid -w /var/flows/router_1 -E5G -S3 0/0/2050

you must manually create the /var/flows/router_1 directory and give the flow-capture process owner adequate permissions to write into the directory.

Also, this could be caused by a host firewall blocking the packets from going up the network stack. It turns out that the firewall will stop the packets after tcpdump sees them. Simply adjust the firewall rules (e.g., iptables) to permit the netflow exports.
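
If the host firewall turns out to be the culprit, an iptables rule along these lines (using the example port 2050 from above; your chains and policies may differ) will let the exports through:

   # Allow incoming netflow exports on UDP port 2050
   iptables -A INPUT -p udp --dport 2050 -j ACCEPT
   # Confirm the rule is present and its packet counters are increasing
   iptables -L INPUT -v -n | grep 2050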


24. Why are the embedded links to Trackings not lining up on Group graphs?
It seems that newer versions of RRDtool handle the COMMENT command that produces a line break a little differently. It involves FlowTracker_Grapher only and is fixed in later versions of FlowViewer_3.3.1.


25. I point my browser to FlowViewer.cgi, but see only broken image symbols. Why?
This could be due to a number of things, mostly related to permissions. Make sure that the process owner for your web server (e.g., apache, www-data, etc.) has write permission into the htdocs directory immediately above the $reports_directory. Sometimes this is the root htdocs directory as defined in the httpd.conf file (or, with Debian, the apache2.conf file). FlowViewer.cgi will try to create your $reports_directory if you haven't already created it, and it will try to copy the FlowViewer.png and User Logo graphics into the directory. The web server process owner must be able to write into the directory.

Make sure you have manually created the $work_directory and that the web server process owner has write permission into it. The same goes for the $names_directory. In general, make sure all directories defined in FlowViewer_Configuration.pm have been created and have write permissions for the web server process owner. To be very safe, do this manually ahead of time.

Finally, make sure your browser (i.e., your desktop IP address) can access the web server htdocs directory structure. Sometimes access controls may be blocking this.


26. What is a good way to set up flow-tools?
The flow-tools software is excellent. It is very stable and has great flexibility through so many options. With FlowViewer, however, the only component that you have to work with is flow-capture. FlowViewer will automatically invoke several of the other components for you. The man pages are very informative. I think this is the most recent version: flow-tools man pages

A typical flow-capture command may look like this:

   flow-capture -p /var/flows/pids/flowtool.pid -w /var/flows/router_1 -E5G -S3 0/0/2050

In the case above, I am storing netflow data separately for each device instead of collecting from multiple exporters into a single directory structure. You can see this by the fact that I have identified the directory using the name of a single device, 'router_1'. I would have a second, similar command for a second device (e.g., 'router_2') where the only differences in the command syntax would be to replace 'router_1' with 'router_2' and to increment the receiving port number from '2050' to '2051', say. I would execute both commands and have two flow-captures running simultaneously. Actually, here at NASA GSFC, I'm running 23 flow-captures simultaneously. Each one takes a surprisingly small amount of CPU, with four of them receiving from very busy devices.

The -p parameter identifies a directory where flow-capture will store the process identifier (PID) for the flow-capture process. The -w parameter identifies the location for depositing the netflow data. The -E parameter identifies how much disk space (5 Gigabytes) should be allocated to this collection, with flow-capture aging out netflow data once the limit is reached. The -S3 parameter informs flow-capture to write a status message to the log file (generally, e.g., /var/log/cflowd.log) every 3 minutes. The 0/0/2050 notation informs flow-capture to expect netflow data from any device IP address (use of '0') and to capture it with any destination IP address; these can be specific IP addresses as well. The UDP port number for receiving packets from the device is 2050.

At this point you are ready to modify the @devices field in the FlowViewer_Configuration.pm file to match the collection directory name (i.e., 'router_1') and you are ready to go.

If you wish to collect from multiple exporters, all exporting to the same UDP port, your flow-capture syntax might look like this:

   flow-capture -p /var/flows/pids/flowtool.pid -w /var/flows/all_routers -E5G -S3 0/0/2050

In this case you would set up the following relevant parameters in FlowViewer_Configuration.pm:

   $exporter_directory = "/var/flows/all_routers";
   @exporters = ("192.168.100.1:New York Router","192.168.100.2:Prague Router");

Finally, you may simply collect all netflow data (from one or more devices) into a single directory structure and not use named devices or exporters. The flow-capture command might look like:

   flow-capture -p /var/flows/pids/flowtool.pid -w /var/flows/all_flows -E5G -S3 0/0/2050

In this case you would set up the following relevant parameters in FlowViewer_Configuration.pm:

   $exporter_directory = "/var/flows/all_flows";
   $no_devices_or_exporters = "Y";

If you are having problems capturing netflow data, see FAQ #23 above.


27. I'd like to replicate flows to another host. How do I do that?
Sometimes it is useful to be able to replicate a netflow stream coming to your normal capturing host on to another host. The flow-tools flow-fanout tool will do this. I, and others, have found the example on the flow-fanout man page to be confusing. So, after playing with it for a while, the following command seemed to do the trick (also thanks to Victor Wiebe):

   flow-fanout -s -V5 -S3 -p/var/flows/pids 0/0/2095 0/127.0.0.1/2195 0/192.168.100.10/2095

In the above example, any flows received at the capturing host on port 2095 will be replicated to the local host (127.0.0.1) on port 2195, and a new stream reflected to host 192.168.100.10 and received there on port 2095. The -s parameter will wind up substituting the exporter IP address as the source address on packets sent to the local host and to the reflected host. The -V5 parameter ensures that all reflected PDUs continue to be in version 5 format; you may need to change this for other versions. The -S3 parameter tells flow-fanout to record status messages every 3 minutes. You'll probably need 'root' to be able to get flow-fanout going as it requires the ability to open a socket via the 'setsockopt' command. Once the replication is working, you would need to start up a flow-capture on the local host listening on port 2195, and a flow-capture on the reflected host listening on port 2095.

I think two things make the man page examples confusing. First, the localip and remoteip fields make different sense in different contexts depending on whether the 'triplet' in question is the original capturing host, the local host, or the reflected host. Second, the example provided receives packets on port 9500, and then resends them to the local host on this same port (and to port 9200 on the reflected host). When I tried that I wound up sending an endless loop of packets to the remote device, as the replicator was essentially listening to itself, caught in a feedback loop.


28. No FlowGrapher graphs. HTTP error_log: Illegal division by zero at .../axestype.pm?
This could be caused by a number of variants around the functioning of GD::Graph. In some cases GD::Graph has not been installed quite properly and a re-install did the trick. In another case, version 1.43 had a bug, and an install of version 1.44 fixed the problem. Also, there may be an inconsistency in how GD::Graph, GD, and fonts (particularly FreeType fonts) interact.
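
To see which GD::Graph version the web server's Perl is actually loading, and to reinstall it from CPAN if needed (a sketch; your CPAN setup may differ):

   # Print the GD::Graph version Perl will load
   perl -MGD::Graph -e 'print "$GD::Graph::VERSION\n"'
   # Reinstall or upgrade GD::Graph from CPAN
   perl -MCPAN -e 'install GD::Graph'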


29. Sometimes FlowTracker_Collector takes more than 5 minutes, and freezes. Why?
It may happen that the host is overburdened at times and FlowTracker_Collector winds up taking more than 5 minutes (300 seconds) to complete. For me, this normally has not been a problem, as it simply starts back up for the next period immediately after the long period completes. This has worked quite well on Red Hat Linux.

Recently we upgraded our hardware and also switched to a Debian OS. Normally FlowTracker_Collector here will complete in somewhere around 60 seconds for our 124 trackings. However, recently at midnight we've experienced an excessive use of system resources, and as a consequence, FlowTracker_Collector would take more than 300 seconds to complete. I was quite surprised to see that even though the FlowTracker daemon was still running, the collection was not taking place at all. FlowTracker_Collector times itself and determines how long it should sleep before running again at the next 5-minute mark. It turns out the Debian Perl 'sleep' function hangs on values less than zero, while Red Hat Perl would convert these to zero and continue happily along. Of course, it could be different versions of Perl causing this, but I haven't got that far in post-mortem analysis.
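
As an illustration only (this is not the actual FlowTracker_Collector code), clamping the computed sleep value to zero before calling sleep avoids the hang:

   # A negative sleep argument can hang on the affected Perl; clamping to zero avoids it
   perl -e 'my $s = -5; $s = 0 if $s < 0; sleep($s); print "woke up\n"'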


30. A FlowTracking name got messed up and I can't remove it. How can I delete it?
FlowTracking names are allergic to special characters. It might be the case that you have created a FlowTracking with a label that contains such a special character. In fact, the label got so messed up that the FlowTracker web page 'Remove' link won't actually remove the FlowTracking. It is easy to remove any Tracking by manually removing both the filter file from your $filter_directory, and the RRDtool file from your $rrdtool_directory. At that point all trace of the Tracking is gone.
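
A sketch of the manual cleanup; the directory paths and file names below are placeholders, so list the two directories first and remove the entries whose names correspond to the broken label:

   # Find the files belonging to the broken Tracking
   ls /your/filter_directory /your/rrdtool_directory
   # Remove its filter file and its RRDtool database (placeholder names)
   rm /your/filter_directory/<broken_label_filter_file>
   rm /your/rrdtool_directory/<broken_label_rrd_file>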


31. FlowViewer returns nothing for Prefix reports (e.g., Source, Dest prefix, etc.)
This may be caused by the type of netflow data that you are receiving. If you run the debug generated FlowViewer command string (e.g., from Flow_Working/DEBUG_VIEWER) from a command line, you may receive the following message from flow-tools: # --- ---- ---- Report Information --- --- --- # # Fields: Total # Symbols: Disabled # Sorting: Descending Field 3 # Name: Destination Prefix # # Args: /usr/local/flow-tools/bin/flow-stat -f25 -S3 # flow-stat: Flow record missing required field for format.