dataflow object inspector (nodes)
object detail: | (history list) |
dataflow developer tools
Examine is the primary 'inspection' tool. Instead of inspecting scripts/functions, document elements, etc., it inspects the components of dataflow –
Use the 'dataflow engine' sidebar at the left to browse through the tree of dataflow components. Click on the icon of a component to expand/collapse its sublists. Click on the name of a node (or epilogue node) – all of its dataflow properties (inputs, outputs, implementation function, data rules, attachments, current value, etc.) are then shown in this main 'detail' panel. [Note the history list in the menu/navigation bar – this lets you switch between the items you have examined recently.]
Interaction with other Tools
Whenever you click on the name of a node (or epilogue node) in any of the other dfirefly tools, that node is added to the list of nodes being examined in the Examine panel. |
document connections
connections: | sources | attachments | streams | clients | servers | refresh |
overview | details |
Document Sources
— components —
note: click an object to expand/collapse its tree
Document Attachments
— components —
note: click an object to expand/collapse its tree
dataflow real-time monitor
monitor | view: | dataflow | project |
overview | watch list |
Dataflow View:
— components —
note: click an icon to follow dataflow tree
click a node's name to monitor it
Browse
the dataflow event trees from the –components– list
(at left) to find the node/datastream to monitor.
Then –
Click on the name of a node to show its current data, above — then
Click on [watch] to begin active monitoring.
Alternatively, examine
browse the project view (the module structure as-written) in the examine tool. Select a node there (click on its name) — its detail is displayed there, and its data summary is displayed above.
Click on [watch] in either the examine tool or watch tool – the node is added to the watch list.
Switch back to watch to follow active monitoring.
For additional information, click on the 'overview' menu item in the monitor menu bar.
watchlist | active | ? |
dataflow thread debugger / breakpoint tool
debug | view: | dataflow | project |
overview | breakpoint list/detail |
Dataflow Engine / Dataflow View
— components —
click an icon to navigate dataflow tree
click on a node name to set a breakpoint
|
Browse
the dataflow event trees from the –components– list
(at left) to find the node/datastream to monitor.
Click on the name of a node to show its current data, here — then
Click on [watch] to begin active monitoring.
Alternatively, examine
browse the project view (the module structure as-written) in the examine tool. Select a node there (click on its name) — its detail is displayed there, and its data summary is displayed above.
Click on [watch] in either the examine tool or watch tool – the node is added to the watch list.
Switch back to watch to follow active monitoring.
For additional information, click on the 'overview' menu item in the monitor menu bar.
promises, servers and component dataflow, oh my!
streams and dialogues: |
clients | streams | proxies |
overview | details |
— components —
— components —
— components —
dataflow control/manipulation console and log
console: |
view → | parms | inputs | nodes | streams |
overview | details |
— components —
i.e., a document element (event), another module, window, or an extension
Browse
the dataflow event trees from the –components– list
(at left) to find the node/datastream to monitor.
Click on the name of a node to show its current data, here — then
Click on [watch] to begin active monitoring.
Alternatively, examine
browse the project view (the module structure as-written) in the examine tool. Select a node there (click on its name) — its detail is displayed there, and its data summary is displayed above.
Click on [watch] in either the examine tool or console tool – the node is added to the watch list.
Switch back to console to follow active monitoring.
For additional information, click on the 'overview' menu item in the console menu bar.
parameter / constant
event: DF_ONNEVERMORE
modify
select data:
initial value | |||||
current value | |||||
new value |
parameter / reference
event: DF_ONNEVER
modify
select reference:
initial value | |||||
current value | |||||
assign reference to: | |||||
module | |||||
stream | |||||
window | |||||
id | |||||
type |
parameter / ondemand
event: DF_ONDEMAND
modify
select input:
initial value | --- | |||||
current value | --- | |||||
retrieve: | ||||||
new value – | module | |||||
stream | ||||||
window |
injector
source event: | DF_ASYNC
|
output event: | DF_ALWAYS
|
modify
select input data:
initial value | ||||||
current value | ||||||
new value | ||||||
inject reference to | ||||||
element | ||||||
module | ||||||
stream | ||||||
event-based data source: | ||||||
source(s) |
---
|
node / computational
input event: | DF_ASYNC
|
output event: | DF_ALWAYS
|
modify
select output data:
initial value | |||||
current value | |||||
new value | |||||
reference to | |||||
element |
node / state-feedback
input event: | DF_ASYNC
|
output event: | DF_ALWAYS
|
modify
select output data:
initial value | |||||
current value | |||||
new value | |||||
reference to | |||||
element | |||||
module | |||||
stream |
multiplexer
input event: | DF_ASYNC
|
output event: | DF_ALWAYS
|
modify
select data:
arbitrary value | |||||
reference to | |||||
element | |||||
module | |||||
stream |
epilogue / computational
input event: | DF_ONPARENT
|
output event: | DF_ONFIRED
|
modify
select output data:
current value | --- | |||||
arbitrary value | ||||||
reference to | ||||||
module | ||||||
stream | ||||||
element |
epilogue / selector
input event: | DF_ONPARENT
|
output event: | DF_ALWAYS
|
modify
select output data:
current value | --- | ||
arbitrary value | |||
inherited inputs
| |||
reference to | |||
module | |||
stream | |||
element |
epilogue / format
input event: | DF_ASYNC
|
output event: | DF_ALWAYS
|
modify
select output data:
current value | --- | |||||
new value – | string/number | |||||
epilogue / format
event: DF_ONFIRED
modify
select data:
current value | --- | |||||
new value – | string/number | |||||
Daemon Multi-tasking Node
input event: | DF_ONPARENT
|
output event: | DF_ALWAYS
|
modify
new value
dataflow graph/connectivity consistency analysis
dataflow compiler: |
lint | datastreams | event-rules | unresolved |
overview | details |
— components —
lint/dataflow graph consistency analysis
Click on [lint] to generate the lint report for the _global DFE.
Click on [unresolved] to generate a report of unresolved references.
lint analysis: | |||
bindery/node dataflow input summary
dataflow bindings: | |||
compiler/compound input event rules mappings
rules map: | |||
dynamic-linker/unresolved datastream references
link analysis: | |||
map/graph dataflow clusters & relationships
dataflow map: |
report | query |
overview | details |
dataflow analysis – mapping dataflow clusters
Click on [report] to generate the cluster analysis and map of the _global DFE.
dataflow graph: |
generate |
overview | details |
dataflow graph analysis
dfirefly/desk reference
detail references
promises, servers and component dataflow, oh my!
dataflow sources and sinks
examine dataflow node properties and datastreams
Use the Examine tool to inspect node details – its properties, implementation function and filters, its input and output datastreams and current values, its event rules and event statistics.

The Examine tool is available in both dfirefly's Main and Detail Panel windows. Click on the clone icon in the menu bar to open the second dfirefly window. This is more than an 'undocking': when a node is inspected in the Main window, a copy is displayed in the clone's Examine tab.

The second copy can be used to great advantage. Any of the other tool tabs may be open in the Main window – monitor, debug, linkage, etc. – while the Detail Panel window is open to inspect a node selected in one of the other tools. Each Examine tool can inspect a different node. Pin the current node to lock the Detail Panel at the current node; the Main Examine tab may then inspect another node, or re-display a node's details from its history list. Un-pin the toolbar and the clone will resume inspecting nodes as they are selected in the Main window.

Inspecting a Node

The following is a 'live' sample of a node object detail –
dfGraphTaskList
node type | asynchronous - computational node
module | _sdk
isfired | asynchronous (any input becomes available)
method/invocations |
attributes |
data:
input datastreams:
post processing:
output datastreams:
Selecting a Node to Inspect

When the name of an object is clicked in the linkage, watch, debug, network, etc. tools in the Main Panel, details about that object (node, epilogue node, ...) are displayed here. Notice the navigation set in the menu bar, above. The navigation bar maintains a history of the objects that have been inspected. Use the icons to browse objects in the history list. Clicking the icon at the end of the navigation set clears the history – except for the current entry. |
dataflow developer tools – dataflow monitor
Good Morning, Dave. †
"I've just picked up a fault in the AE35 unit. It's going to go 100% failure in 72 hours. I am certain of it."
However, unlike Dave Bowman, you may monitor the Alpha Echo Three Five unit here –
or any other dataflow component.
[No potentially fatal extra-vehicular activity required.]
First,
The sidebar (below the monitor menubar) shows –
- dataflow view – [how data flows through the application]
Browse the components' dataflow event tree(s). Click an icon to follow how data flows – beginning with an input event – to the datastream you wish to watch.
Next,
Click on the name of the node/datastream to be monitored. Current status will be displayed. [Switch from overview to watch list if not already visible.]
Click [add to watch list] to begin active monitoring.
Alternatively, examine
Browse a module's nodes (the as-written structure - the project view) in the examine tool. When you select a node there, its summary is also automatically displayed here.
Switch back to watch then click [add to watch list] to begin active monitoring.
† – The Heuristically programmed AL-gorithmic 9000 computer never actually said, "Good Morning, Dave".
dataflow developer tools – dataflow breakpoint debugger
|
number of paused
dataflow threads
select
thread
step out
toggle all
continue
Dataflow
Breakpoints Menubar
Figure D1
Overview
Mouse-over the breakpoints menubar. Notice the
many similarities to a traditional Javascript debugger.
A traditional debugger can step through instruction sequences for us to deduce how/where data is changing. dfirefly, however, follows the data – 'stepping' through how data is transformed and propagated. Following the data with dfirefly involves several concepts –
- dataflow breakpoints and asynchronous dataflow threads
- breakpoint actions – basic actions at a dataflow breakpoint
- node actions – dataflow phases of a node
- dataflow 'step' comparisons – dataflow step next(phase), step over(phase), step in(dependents) ...
- dataflow event stack – information displayed when a breakpoint occurs
- managing dataflow breakpoints – set/clear, disable/enable
- synergistic Debugging – using dfirefly and developer tools together
Dataflow Breakpoints
A dataflow breakpoint can occur whenever dataflow is about to enter/exit a node. When a breakpoint occurs, dfirefly displays the dataflow event stack, aka a dataflow thread. A dataflow thread begins with an input event/data (the injection node) and follows the propagation of that data/event until it reaches the breakpoint node.
At each node in the dataflow thread we can see the inputs to node/computation and the output to the next node in the thread.
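This idea can be sketched in plain JavaScript (the names and structure here are invented for illustration – this is not dflibg's internal representation): a value propagates from an injector through its descendants while the path travelled is recorded; when a node with a breakpoint is reached, only that thread pauses, and the recorded path is the dataflow event stack.

```javascript
// Sketch (hypothetical): propagate a value from an injector down a
// tree of nodes, recording the 'dataflow event stack' -- the path
// from the point of entry to the node where a breakpoint pauses us.
function propagate(node, value, stack) {
  stack.push({ node: node.name, value });
  if (node.breakpoint) return { pausedAt: node.name, stack };
  const out = node.fn ? node.fn(value) : value; // injectors pass through
  for (const next of node.descendants || []) {
    const hit = propagate(next, out, stack);
    if (hit) return hit; // a downstream breakpoint paused this thread
  }
  return null; // propagation reached its leaves without pausing
}

// injector -> double -> label, with a breakpoint set on 'label'
const label  = { name: "label",  breakpoint: true };
const double = { name: "double", fn: (v) => v * 2, descendants: [label] };
const inject = { name: "inject", descendants: [double] };

const paused = propagate(inject, 21, []);
// paused.stack now shows inject(21) -> double(21) -> label(42)
```

At each stack entry we can see the value that was propagated to the next entry, which is exactly what the breakpoint display described below shows.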
Asynchronous Dataflow Threads
Dataflow breakpoints are asynchronous – only the current dataflow thread is paused at the breakpoint. The application itself is not paused, as it would be in a traditional debugger. There may be multiple dataflow threads paused at any moment. Each dataflow thread corresponds to a different input event and data propagation that has hit a breakpoint. See notes, below.
Breakpoint Actions
Breakpoint actions fall into two groups: Basic Actions and Node Actions. Basic Actions are analogous to traditional breakpoint actions, except that they apply to data. Node Actions relate to how data is transformed and then propagated by a node.
Basic Actions
The continue, step-next/into, and step-out basic actions apply to nodes and datastreams externally, not to their internals (data transformation and propagation phases). The show datastreams and toggle dataflow breakpoints actions apply to all breakpoints.
toggle | disables or enables all dataflow breakpoints. Breakpoints remain in the list. |
show datastreams | converts the breakpoint list to a breakpoint and watch list. Each entry in the break list will also watch the node's output datastream. |
continue | continues execution until the next dataflow breakpoint is encountered. |
step (next/in) | continues and breaks at the next node to be 'fired'.
This finishes any processing for the current node. If the output datastream(s) flow to any epilogue nodes, any descendant nodes, or to any client modules, dflibg will pause dataflow at the first of those nodes, i.e., step-in. If this is a leaf in the dataflow event tree, then dataflow propagation of the parent/ancestor continues and pauses at the next sibling node, i.e., step-next. If there are no more siblings, dataflow propagation continues with the parent/ancestor's parent/ancestor, i.e., step-out, etc. |
step (out) | dataflow step-out finishes any remaining processing and/or data propagation for the current node – a) invoking the implementation function, b) applying an output filter and writing the result to any attached document objects, c) propagating a node's output to any remaining epilogue nodes, descendant nodes, and any local (non-asynchronous) clients. It then finishes the same for all of the remaining epilogues, descendants, and local clients of the node whose output event caused the current node to fire. |
Node Actions: Fire and Step-Over | |
These actions correlate to the Execution and Propagation phases of firing a node –
Execution The execution phase first marshals a node's input datastreams, pausing if a breakpoint has been set. Execution then entails:
Dataflow Propagation With the raw value, the filtered (i.e., formatted) value, and/or the asynchronously-acquired object reference or server response –
They may thus be considered dataflow extensions to the meaning of step-over | |
fire (+next) | advances to the next dataflow phase. When a breakpoint happens, the node is paused just before its implementation function is 'fired'. At this point, fire/next is similar to traditional step-over – it invokes the implementation function of the current node, pausing after the function returns with a new raw value. Subsequently, fire/next moves through validation and assignment, and then through the dataflow propagation phases. |
step-over (client modules) | continues, forwarding output to any clients (modules, threads, negotiated datastreams, windows, etc.)
and pausing before invoking any epilogue subtree or dependent nodes.
|
step-over (epilogue tree) | Continues, firing any nodes in an epilogue subtree, and pausing before invoking any dependent nodes (descendants). [It will invoke the implementation function if not yet done, forward datastream output to all client modules, threads, extensions, and negotiated datastreams if not yet done, etc.] |
step-over (all) | Continues, pausing after dataflow has been fully propagated from this node. |
Breakpoint Action Notes
Step-Next/Step-Out Comparisons
In a traditional debugger, step-next/in steps to the next instruction. If the next instruction is a function call, then it steps into the function. If there are no more instructions in the current function, then the function returns and execution pauses at the next instruction after the call to the current function. In dataflow, we follow data propagation to the next node, and so forth, in a similar manner.
When the current statement is a function call, the traditional debugger's step-over steps over the function to the statement after the call. This 'steps-over' the function and any functions that it calls ad infinitum. In dataflow, we step over the node and any nodes to which its output is propagated.
For step-out, a traditional debugger finishes all of the steps (including subroutine calls) of the current function, and then returns to the caller, pausing at the instruction after the call to the current function. In dataflow, we finish all of the 'steps' of the parent/ancestor node and then 'return' to its parent/ancestor node's dataflow propagation, pausing at the next of its epilogues or descendants.
Asynchronous Behaviors / Out-of-Band Dataflow
When a node's output is propagated, thread, window and extension clients are invoked asynchronously. Any breakpoints in them will not fire until after the current dataflow event stack completes. This is also true for proxy nodes and the asynchronous response they receive from a server. The response begins a new, independent dataflow event propagation, and dataflow event stack.
Dataflow Event Stack
The dataflow event stack shows the active dataflow event tree – how data has moved through the application from the point of entry to reach the current breakpoint. The top of the stack is the injector node where data 'arrived' from an external source - the document, a thread, another window or extension, etc. The bottom of the stack is the current node.
Dataflow Event Stack
Figure D2
(working demonstration)
[Interactive example – a live dataflow event stack for the dfcom.tabPageServer module. The stack runs from the tabOnClick injector (sourced from nine document-element onClick datastreams and one negotiated datastream, win$6977841411 onConnect loadACK), through the tabSelection epilogue and its descendant tabSelection.1 (which inherits inputs from its parent and progenitor), down to the tabDeactivateTab node – paused at DATA VALIDATION, where the DF_ONVALID assertion on the proposed value will fail. One node's implementation method, as shown in the demo:

    function (activetab) {
        if (typeof(activetab.value) == "string")
            return activetab.value;
        else
            return activetab.id + "Div";
    }
]
Each entry shows the node and its 'state' or firing 'phase'. Click on an entry to show the node's value (the value propagated to the next stack entry), and its input datastreams.
Phase may be one of –
FIRING | : | data marshalled, ready to invoke implementation function |
VALIDATION | : | raw/computed value ready to validate against output event conditions |
PROPAGATION | : | raw/computed validated; ready to assign/propagate/apply-output-filter |
CLIENT | : | forwarding to clients and/or negotiated datastreams |
EPILOGUE | : | invoking epilogue node (and subtree) |
DESCENDANT | : | propagating output to dependent nodes |
IDLE | : | finished |
Only the last entry (the current node, the breakpoint) will have a phase of FIRING or DATA VALIDATION. All other entries will be in one of the propagation phases.
This is a fully functioning dataflow event stack example. Click the tabOnClick entry to see the event and data which began this dataflow chain. Click the tabSelection.1 entry to see the epilogue's data inheritance, etc. Click on the inspection icon to view all of the node's datastreams and properties in the examine panel.
[Note, however, the inspection will only work if dfcom.tabPageServer module has been included in the application.]
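The phases in the table above form a fixed order, and fire (+next) advances through them one at a time. As a tiny sketch (illustrative only – the phase names come from the table above, but this function is not dfirefly's API):

```javascript
// The firing phases of a node, in the order dataflow moves through
// them. fire (+next) advances to the next phase; a node that has
// finished propagation is IDLE.
const PHASES = ["FIRING", "VALIDATION", "PROPAGATION",
                "CLIENT", "EPILOGUE", "DESCENDANT", "IDLE"];

// Hypothetical helper: given a current phase, return the next one.
function nextPhase(phase) {
  const i = PHASES.indexOf(phase);
  return i < 0 || i === PHASES.length - 1 ? "IDLE" : PHASES[i + 1];
}
```

This also shows why only the last (breakpoint) stack entry can be in FIRING or VALIDATION: every earlier entry has already fired and is somewhere in the propagation phases.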
Setting/Clearing/Pausing Breakpoints
|
Breakpoint List
Figure D4
Synergistic Debugging
Both dflibg dfirefly and the browser's Developer Tools can be used at the same time – dfirefly debugging the dataflow and communications, the embedded developer tools used for debugging the various nodes' internal implementation functions. In fact, you can set a Javascript breakpoint in Developer Tools, and still have dfirefly examine datastreams and interact with the application, while the application is paused at that Javascript breakpoint!
dataflow developer tools – dataflow console
The Dataflow Console
The dfirefly™/CONSOLE tool provides actions to control and manipulate dataflow. Use it to control the application's dataflow engines and modules – to test the application for various inputs/parameters, experiment with different dataflows, etc. With it one may modify and generate data –
- [modify] a parameter node (a constant value parameter)
- [input] a value to an injection node and thus its DFE/module
- [refer] a reference node to a different module or window
- [create] a stream (a conduit for negotiated datastreams) to another module, window
- [connect] the output of a node to another client (injection node, module, window, or negotiated datastream).
- [accept] a negotiated datastream from a datasource
- [attach] the output of a node to a document object
- [source] the input of an injection node to a document object/event
- [control] when and how output and output events are generated, or
- [push] data to and/or [fire] an arbitrary node.
Notice, this is a dataflow console – it is all about generating and managing dataflow and dataflow events. For a javascript console, where more options are available by directly entering dflibg object methods, see Developer Tools or Firebug™. For a full SDK, see dflibg emerald™ and follow the yellow-brick road.
dataflow graph connectivity/consistency analysis
The embedded dflibg library and the dfirefly SDK include a suite of dataflow lint and compiler analysis tools. Where the dfirefly examine tool inspects individual nodes in detail, the lint and compiler/linker tools look at the project and/or module as a whole. They provide a dynamic view of what data flows between the various nodes – looking for implementation issues such as missing datastream connections and design issues, helping verify data dependencies and events, flagging as-yet unresolved references, etc.
The suite includes:
- dflibg™ lint — dataflow consistency analysis
- dflibg™ dynamic linker — unresolved references summary
- datastream bindery analysis — datastream connectivity summary
- event-rule compiler analysis — compound input event rule summary
Usage
The lint/compiler tools typically work in two modes: query mode and report mode. To query an analysis/summary regarding a particular module (or node), select a node from the sidebar. Then click on the summary level desired (project, module, node, as appropriate).
If a printer icon appears, the tool supports printable reports. To view and print the report, click on the printer icon. The dflibg library in the tab/process being inspected will generate a printer-friendly report in a separate window.
lint
dflibg lint identifies inconsistencies in the dataflow relationships, along with possible issues to investigate or design intentions to examine and verify. dflibg lint lists
- orphans – nodes with no inputs
- dead-end dataflows – nodes whose output goes nowhere, attaches to no document element, is never forwarded to another module, server, etc.
- mismatched connections – inconsistent number of input datastreams, inappropriate or misused node types
- complex dataflow – dataflow cycles and loops, recombinant paths
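The checks above can be illustrated with a toy graph analysis in plain JavaScript. The graph encoding, function names, and heuristics here are invented for illustration – dflibg lint's real analysis is richer:

```javascript
// Toy sketch of the kinds of checks a dataflow lint performs.
// Nodes are an adjacency list: name -> array of downstream node names.
// 'attached' holds nodes whose output is attached to a document element.
function lint(graph, attached) {
  const hasInput = new Set();
  for (const outs of Object.values(graph))
    outs.forEach((n) => hasInput.add(n));
  const names = Object.keys(graph);
  return {
    // orphans: nodes with no inputs (may be intentional injectors)
    orphans: names.filter((n) => !hasInput.has(n)),
    // dead ends: output goes nowhere and is attached to nothing
    deadEnds: names.filter((n) => graph[n].length === 0 && !attached.has(n)),
    // cycles: a node reachable from itself
    cycles: names.filter((n) => reachable(graph, n, n)),
  };
}

// depth-first reachability, tracking visited nodes to terminate on loops
function reachable(graph, from, target, seen = new Set()) {
  for (const n of graph[from] || []) {
    if (n === target) return true;
    if (!seen.has(n)) {
      seen.add(n);
      if (reachable(graph, n, target, seen)) return true;
    }
  }
  return false;
}

const report = lint(
  { in: ["a"], a: ["b"], b: ["a", "sink"], sink: [], lost: [] },
  new Set(["sink"]) // 'sink' writes to a document element
);
// 'lost' is a dead end; 'a' and 'b' form a dataflow cycle
```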
--rewrite in progress --
Click on [lint], above, to generate the full lint report. The dflibg library in the process being inspected will generate a full (version 1) printable lint report on the application in a separate window.
unresolved
Click on [unresolved], above, to generate the full dynamic linker report of the node/datastream references that have not yet been resolved. The dflibg library in the process being inspected will generate a full (version 1) report in a separate window.
Lint Query [under development]
Click on [lint query], above, to query the library for 'lint-able' objects. These are displayed in the sidebar. Select an item from the sidebar, and then click on [lint this].
dataflow module/cluster analysis
What is MAP and why is it Important?
The dfirefly™/MAP tool discovers the actual dataflow design. It discovers what 'clusters' of nodes are 'closely' related – not because they were written in the same DFE or module, but because they are connected by dataflow.
The more a cluster (or a set of interconnected clusters) approximates the set of nodes in a DFE or in a module, the better the design and modularity of that DFE/module.
Applications where multiple inputs are factors in various computations will find that many of the clusters overlap – that is, a particular node will belong to more than one cluster. If there are a number of such intersections, then there is a good chance that the intersecting clusters should be organized into a module; individual clusters or tightly grouped clusters in the module should probably be organized into a DFE in the module.
The dfirefly™/MAP tool analyzes how nodes are connected to each other – that is, where data flows, and which datastreams are connected to whom. It evaluates the dataflow design vs. its organization.
It analyzes these connections to create the application's dataflow event tree – the graph of the application, but with recombined paths unrolled (much as a compiler may unroll a loop in imperative programming).
Where examine, lint, and link all inspect the application based upon the module/dfe organization and the nodes that belong to them, dflibg/MAP derives the clusters of nodes that are related by dataflow.
A cluster is always 'rooted' by an injection node (type DF_INJECTOR). When data is sourced to/injected into the injector node, it is propagated to its descendant nodes, and their descendants, etc., until it reaches a leaf node – a node that does not have any descendants (but may have clients in other modules, windows, etc.). The set of nodes through which the injected value is propagated, directly and indirectly, is the dataflow cluster.
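As a sketch (with a hypothetical graph encoding – not dflibg's API), a cluster can be derived by following propagation outward from an injector:

```javascript
// The cluster rooted at an injector is every node the injected value
// can reach, directly or indirectly: a breadth-first traversal.
function cluster(graph, injector) {
  const members = new Set([injector]);
  const queue = [injector];
  while (queue.length) {
    const node = queue.shift();
    for (const next of graph[node] || []) {
      if (!members.has(next)) {
        members.add(next);
        queue.push(next);
      }
    }
  }
  return members;
}

// Two injectors whose clusters intersect at 'shared' -- the kind of
// overlap that suggests the clusters belong in one module.
const g = { in1: ["x"], in2: ["y"], x: ["shared"], y: ["shared"], shared: [] };
const c1 = cluster(g, "in1"); // in1, x, shared
const c2 = cluster(g, "in2"); // in2, y, shared
```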
The dfirefly™/MAP tool operates in two modes – report and query/browse.
Report
Creates several reports on the discovered dataflow structure – 'Subgraph Summary', 'Subgraph Intersection Analysis', and 'Subgraph and Cluster Maps'.
The 'Subgraph Intersection Analysis' shows all of the discovered subgraphs and any overlap/intersections between them; i.e., portions they may have in common.
The 'Subgraph Summary' report organizes the discovered subgraphs (analyzed and/or recommended node clusters) and their geometries.
'Subgraph and Cluster Maps' is the Graphviz dot language representation of the organized clusters.
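For illustration only – not the actual report generator – a Graphviz dot representation of a small subgraph might be emitted along these lines:

```javascript
// Emit a Graphviz dot digraph from an adjacency list
// (name -> array of downstream node names). The real 'Subgraph and
// Cluster Maps' report's exact output format may differ.
function toDot(name, graph) {
  const lines = ["digraph " + name + " {"];
  for (const [node, outs] of Object.entries(graph))
    for (const next of outs)
      lines.push('  "' + node + '" -> "' + next + '";');
  lines.push("}");
  return lines.join("\n");
}

const dot = toDot("cluster0", { inject: ["double"], double: ["label"] });
// a digraph with two edges: inject -> double -> label
```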
Query/Browse
[under development] Analyzes a particular module (or the entire application); the analysis may then be inspected. The analysis may follow the fixed structure as programmed by the developer, or the dataflow event graph structure as discovered by dflibg dfirefly™.
dataflow visual analysis
Dataflow Graph Report
The dfirefly™/GRAPH tool visually maps the _global DFE into a directed graph/set of directed graphs. It maps the dataflow structure as it was written.
Commands
generate | generates a dataflow analysis and graph representing the dataflow in the _global DFE
[in-progress: analysis and graph of a) the entire application, b) a selected component]
[generate] opens two windows: 1) the analysis, 2) the graph. [If possible, these will be separate windows. However, depending upon browser version and settings, they may open as new tabs in the main browser window – check the tabs list.] The analysis report is the Graphviz .dot language representation of the dataflow graph. Until a .dot language graph generator can be embedded into dfirefly, the graphical presentation is generated using the public server at www.dflibg.org |
For an application that has been fast-tracked from the bottom up, and not yet been organized into modules, this tool is especially helpful in visualizing the inherent clusters – and thus how to organize the application.
See Also
Use dfirefly™/MAP to analyze the dataflow event tree, identify clusters, overlaps/intersections, etc.
Summary: |
intro | examine | watch | debug | linkage | network | console | lint | map |
intro |
dataflow developer tools
Start
Choose a dflibg/js dfirefly developer tool from the icon menu above. More information about dfirefly and each tool is available in the full Welcome Overview, and in the individual tool overviews. Click on the options/settings icon to configure when/how the overviews are displayed.
Use the left sidebar to browse either the dataflow component structure, or the dataflow/dataflow event structure (as appropriate for the selected tool). Each tool will display a menu bar of actions/options for the selected dataflow object or for the entire application.
Supplemental Panels
In addition – you may at any time –
- Click on the dataflow log icon, below, to toggle open/close the dataflow console log.
- Click on the dataflow console icon below, to input dataflow or modify input parameters.
dataflow developer tools – reference summaries
Organization
This is the Detail Panel of dflibg dfirefly dataflow developer tools.
Use the Main Panel to
browse dataflow components, set dataflow watchpoints, set/step dataflow breakpoints, etc.
Use the Detail Panel to —
- examine properties of node(s) currently selected in the Main Panel,
- browse the dflibg/dfirefly introductory help and reference guide, and
- concurrently examine the dataflow console log
When the name of an object is clicked in the Main Panel, details about that object are displayed here in the Detail Panel. This may be attributes, defined behaviors, dataflow definitions, current value, etc., of a node (where computation occurs) or datastreams (the data connections/events). [Clicking the browse icon of an object shows/hides the contents or links of that container – its nodes, epilogue or child nodes, dataflow descendants ...]
For example, when examining attachments in the linkage tool, clicking on a name of an output node causes details about that node to be displayed. In the examine tool, with which you may browse modules, nodes, etc., details about a node are displayed in that tool's detail frame in the Main Panel.
Reference Set
Dataflow programming with dflibg™ is very different from traditional, imperative-style javascript programming. Therefore, both panels provide access to introductory help and reference resources. Click on the home icon in the menu bar to come back to these welcome pages.
dflibg-based javascript computations are completely data (and event) driven. A computation occurs when its data is 'ready' – i.e., available. Components of a dflibg-based application are nodes where computation occurs, datastreams between these nodes, and modules which organize related nodes.
In traditional, imperative javascript programming, computation occurs as directed by a sequence of steps – one operation following another, ... – not by when data is available. Frequently, in order to deal with 'when data is available', imperative javascript becomes 'call-back hell'.
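The contrast above can be sketched in plain javascript. This is a minimal illustration of the data-driven idea only – the node shape, names, and firing logic below are invented for the example, not the dflibg API:

```javascript
// A node fires its implementation function only when every input is 'ready'.
function makeNode(name, inputs, fn) {
  const pending = new Set(inputs);  // inputs still waiting for data
  const values = {};
  const listeners = [];
  return {
    name,
    // a datastream delivers a value to one input
    receive(input, value) {
      values[input] = value;
      pending.delete(input);
      if (pending.size === 0) {            // all data ready -> the node fires
        const out = fn(values);
        listeners.forEach((l) => l(out));  // propagate on the output datastream
      }
    },
    // connect a downstream consumer to this node's output datastream
    onOutput(listener) { listeners.push(listener); },
  };
}

// Usage: a 'sum' node computes only once both inputs have arrived.
const sum = makeNode('sum', ['a', 'b'], (v) => v.a + v.b);
let result;
sum.onOutput((out) => { result = out; });
sum.receive('a', 2);   // not ready yet - nothing fires
sum.receive('b', 3);   // data is ready - the node fires
```

No step sequence dictates when the sum happens; the arrival of the last input does.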
As dflibg-based application programming is very different from traditional javascript for dHTML, so, too, must be the set of developer tools. Where traditional browser developer tools focus on instruction step sequences, functions, stack frames, etc., dataflow developer tools must focus on how data moves between the components. dflibg dfirefly has been created to meet that need.
dflibg™ dfirefly dataflow developer tools are intended to be used in concert with other developer tools – e.g., use Google Chrome™ developer tools, Mozilla Firebug™, etc., for inspecting resources and debugging the individual node's javascript implementation functions; use dfirefly for inspecting nodes, datastreams and events.
dflibg™ dfirefly includes a number of tools to assist in the development of dflibg-based applications, the illumination of dataflow design inconsistencies, and the debugging of the compiled dataflow application.
examine | is the dataflow equivalent of inspect {source}; instead of scripts/functions, it inspects the components of dataflow: the properties of nodes (including their implementation functions), and the dfe's and modules into which they are organized |
linkage | or dcom – examines the datastream connections between the document and the dataflow modules – the document sources of input and the attachments through which data is written to the DOM. |
watch | monitors dataflow in the executing machine; where an imperative tool 'watches' a variable for changes in value, dfirefly watches datastreams (node outputs). |
debug | is our old friend, the breakpoint tool; however, instead of breakpointing instructions, dataflow events are the breakpoints – a break occurs when a node is fired because data became ready. |
Each tool has its own welcome page, included in this set. Clicking the home icon in the menu bar will bring you back to these welcome pages. Browse with the nav bar, below, or select information pages, above.
Additional dflibg™ dfirefly tools dig deeper into the dataflow aspects of dflibg-based applications, the datastreams between components.
xcom | examines dataflow communications - not between nodes in the same module, but the datastream connections between modules, threads, frames, windows, extensions, and servers - i.e., promises, proxies, servers and component dataflow, oh my! |
network | identifies/monitors asynchronous communications through proxy nodes, nodes which send and receive from external servers in response to their input dataflow, and through surrogate nodes, nodes which create and manage objects (e.g., windows) in response to input dataflow |
console | allows one to 1) retrieve and follow the console log and the exceptions log issued by dflibg library, and 2) inject data into and/or configure dataflow components – in the 'inspected' window/page. |
lint | dflibg.lint dynamically identifies inconsistencies and ambiguities – orphans, black-holes, cycles, missing datastreams, etc. – as well as other dataflow design analyses – multipath analysis, etc. |
map/graph | while dfe and modules form the static design of a dataflow application, the dataflow design – the dataflow graph of the paths data follows from its injection into the dataflow engine until it is exported or written to a document – is analyzed via map/graph |
The next page describes some of the panels; following that are the individual dfTool introductions.
A number of the tools use a page layout similar to the one you see now –
- a dataflow component tree (or graph) outline as a sidebar panel,
- a detail panel (the one with the text you are reading now)
- a menu bar (shown above) with the tool name (e.g., 'welcome:') and any actions/subcommands
Sample Application Component Tree
The component outline sidebar on this page shows a sample dataflow application - the dflibg™ dfirefly SDK. Indeed, this SDK/browser extension is itself a dflibg-based dataflow application! [The peer supporting modules embedded in the library of the 'inspected' application are also dataflow application modules. Communication between modules in this SDK and those peer modules also occurs via 'dataflow'!].
This SDK also makes extensive use of reusable dataflow modules – called Dynamic Server Modules. These modules drive the menu bars, the navigation bars, the hierarchical expand-collapse displays, and a host of other features. Hence, the SDK is a treasure trove of examples.
The next page includes credits and notes; following that are the individual dfTool introductions.
Legal Notes
All trademarks present in these pages are property of their respective owners. Use or inclusion of any is solely for the purpose of identification, and to illustrate how use of this SDK may integrate with the respective product or service.
A partial list: Google Chrome® is a registered trademark of Alphabet, Inc.; Firebug® is a registered trademark of the Mozilla Foundation; Graphviz, developed at AT&T Bell Laboratories/AT&T Research, is now community supported.
In addition, dflibg™ library, dflibg dfirefly™ dataflow developer tools, and dflibg emerald™ dataflow software development kit, are trademarks of Chrysalis Systems (NJ) (supported at dflibg.org).
General Notes
The dflibg library, the dflibg dfirefly dataflow developer tools and the dflibg emerald SDK are works-in-progress. While stable, with most documented features fully functional, there are a few exceptions where a feature may not yet be complete. Such exceptions are noted, where possible, here or in the documentation and/or sample demonstrations at dflibg.org. Unlike many projects, dflibg documentation is written before/during development of the software, as a roadmap for development.
The following pages are the individual dfTool introductions.
Examine
is the primary 'inspection' tool. Instead of inspecting scripts/functions, document elements, etc., it inspects the components of dataflow –
- nodes – where computation occurs – and
- their datastreams – where data and events flow between nodes and the outside world
In a dflibg-based dataflow application, related nodes are grouped into unnamed dataflow engines, dfes, and named dfe's, i.e., modules. Every application also includes a _global dfe for nodes not instantiated as part of a dfe or module. Use the Navigation Frame to browse these dfe's and modules to select a node and examine its properties and datastreams.
Navigation Frame
In the list of dfe's and modules, locate a node in the same way as you would drill-down a file directory tree to locate a file. The list in the examine tool's Navigation Frame is the Project View of the application.
In a Project View, nodes in a module or dfe are organized into several groups –
- parameter nodes – constants and references
- injector nodes – where data enters – and,
- computational nodes
When a node is selected, its detail properties, implementation function and statistics, datastreams (and their current values), etc., are shown in the detail frame. In addition, a node summary (name, type, current value) is shown in a header in the watch, debug, and console tools. Once a node is selected, use the command buttons to a) add the node to dfirefly's dataflow watchlist, b) add the node to the dataflow debugger's breakpoint list, or c) at any later point, refresh the properties and values display.
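As an illustration of the grouping described above, a Project View entry might be modeled like this; the object shape and all names are hypothetical, not dflibg's actual format:

```javascript
// Hypothetical Project View entry: nodes in one module, grouped as the
// Navigation Frame lists them.
const projectView = {
  module: 'pricing',
  parameterNodes: [{ name: 'taxRate', value: 0.07 }],   // constants/references
  injectorNodes: [{ name: 'cartTotal' }],               // where data enters
  computationalNodes: [
    { name: 'grandTotal', inputs: ['cartTotal', 'taxRate'] },
  ],
};

// Drill down to a node, as you would drill down a directory tree to a file.
function findNode(view, name) {
  return [
    ...view.parameterNodes,
    ...view.injectorNodes,
    ...view.computationalNodes,
  ].find((n) => n.name === name);
}
```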
Watch
Monitors data in the executing dataflow machine; where an imperative tool 'watches' a variable for changes in value, dfirefly watches datastreams. Use watch to monitor specific datastreams that flow –
- into the dataflow engine – an input datastream to an injection node
- to other nodes – a node's output datastreams, and
- to document objects – a node's object datastreams
To monitor a datastream, we monitor the node that generates the datastream. To monitor an input datastream, we monitor the injection node to which it is connected.
So, to monitor data on a datastream, select a node from any navigator (examine panel, watch panel, debug panel, etc.). The node and its current output are displayed in the node header. [Complete node details are shown in the examine panel.] Click on the [ watch] button to add the selected node and its output datastream to the watchlist.
Selecting an injection node is a convenient way to monitor input events from document elements, dataflows from other windows, extensions, modules, threads, etc.
Other Ways to Watch Data
When a dataflow breakpoint is set, via any [ break] button, the node is added to the breakpoint list displayed in the debug panel. This list also monitors the value of the node, i.e., the value of its output datastream. The datastreams in the breakpoint list are always being watched, even if you have hidden their display. Further, if you disable an individual breakpoint in the breakpoint list, its output is still being monitored and (a summary of) its value continues to be updated as it changes.
In addition, in most places where a node name is displayed, clicking on that name queries dflibg in the running application. The node and its output datastream(s) are displayed in the watch panel's node header. Every node header (and the command bar in the examine panel) includes a [refresh] button. Click on it to requery the node's current value(s).
Debug
is a full-featured dataflow debugger tool. In imperative Javascript, a debugger allows us to break at a particular instruction, inspect the 'machine state', and step through instructions to try to find where data manipulation went wrong.
In a dataflow debugger we follow the data – the dataflow events are the breakpoints. A breakpoint occurs when a node is about to be fired because data became ready. Hence, we can
- set a breakpoint for a node,
- observe its inputs (and from whence they came)
- step through the transformation phases of data in the node, and then
- follow (step next, in, out, etc.) how data flows through the application.
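A dataflow breakpoint of this kind can be sketched as a hook that runs just before a node fires. The names and shapes below are assumptions for illustration, not the dfirefly implementation:

```javascript
const breakpoints = new Set();  // node names with an active dataflow breakpoint
const breakLog = [];            // records each break (stand-in for pausing)

function fireNode(node, inputs) {
  if (breakpoints.has(node.name)) {
    // a real debugger would pause here, before the node fires;
    // this sketch just records the break and the ready inputs
    breakLog.push({ broke: node.name, inputs: { ...inputs } });
  }
  return node.fn(inputs);   // the node fires: its data was ready
}

breakpoints.add('double');
const double = { name: 'double', fn: (v) => v.x * 2 };
const out = fireNode(double, { x: 21 });   // break recorded, node still fires
```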
Dataflow Threads
A completely new debugger strategy has been implemented using Dataflow Threads. Every external input – whether it is from a document event, dataflow from another thread, module, window, or extension, an XMLHTTP response, etc. – creates a new dataflow thread. The thread begins with one of –
- the injection node where the data/event was recognised,
- the proxy node where an AJAX response/acknowledgement was returned,
- the proxy node where a server/client's promise has been fulfilled and a response was returned.
The tail of the thread is the current node, the breakpoint. Please check the debug panel overview page for more details on the dataflow debugger.
A Dataflow Thread is completely independent of the Javascript stack! Hence, BOTH dfirefly AND the browser's Developer Tools may be used at the same time!
Linkage
Good application design often dictates a modular approach – grouping together functions (in this case, nodes) that are highly cohesive, i.e., belong together. They may operate on the same data or stream of data as it flows through the dataflow engine (informational or sequential cohesion). They may focus on implementing a particular task or feature (functional cohesion). By its inherent nature, dataflow design is typically a combination of these.
Once functions (nodes) have been grouped (often as per the dataflow graphs or subgraphs) (dflibg-speak 'clusters'), best practices prefer that they be loosely coupled. One group (module, dfe, thread, ...) is unaware of the internal design of another. They 'share' information via well-defined, formal interfaces.
dflibg™ objects provide .client() and .source() methods to establish datastream linkages between such groups or modules, between a dfe in the current window and one in another, to/from a thread, an in-line frame, or even an extension component; between document events and 'listening' modules/nodes.
In addition, stream objects facilitate the ultimate loosely coupled interface - a negotiated datastream.
The linkage dfTool is used to identify and inspect those datastreams. For negotiated datastreams, it identifies the dataflows, the 'connection' state, etc.
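As a rough illustration of such a loosely coupled linkage – the source()/client() names follow the text above, but the signatures and the stream object here are assumptions, not the actual dflibg API:

```javascript
// A stream with a formal publish/subscribe interface; two groups share only
// the stream, never each other's internal design.
function makeStream() {
  const clients = [];
  return {
    client(onData) { clients.push(onData); },             // consumer subscribes
    source(value) { clients.forEach((c) => c(value)); },  // producer publishes
  };
}

// Module A publishes prices; module B only ever sees the stream interface.
const prices = makeStream();
const received = [];
prices.client((p) => received.push(p));   // module B listens
prices.source(9.99);                      // module A publishes
```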
Network
Network, shmetwork! Programmers using dflibg dataflow will never code another XMLHttpRequest block, ever. All AJAX (Asynchronous Javascript and XML) communication occurs as the result of dataflow and defined behaviours for nodes. Nodes where data flows to/from external servers are Proxy Nodes. That is, they 'stand in for' or represent a dialogue with the external server.
The network facility provides the ability to monitor
- the datastreams flowing into the proxy node,
- the resulting output (the 'request' to GET, or the 'data' to POST, as applicable) from the proxy node to the server
- the response or acknowledgement from the server, and
- identify the distribution/propagation of the response/acknowledgement
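The Proxy Node pattern described above can be sketched as follows. The node shape is invented for this example, and a callback-style fetch implementation is used so the example runs synchronously; a real proxy would use XMLHttpRequest or fetch:

```javascript
// A proxy node: input dataflow in, request out, server response propagated
// to listeners.
function makeProxyNode(url, fetchImpl) {
  const listeners = [];
  return {
    onOutput(cb) { listeners.push(cb); },
    receive(query) {
      // the node 'stands in for' the dialogue with the external server
      fetchImpl(url + '?q=' + encodeURIComponent(query), (response) => {
        listeners.forEach((cb) => cb(response));   // propagate the response
      });
    },
  };
}

// Usage with a stubbed fetch implementation (no real server involved):
const calls = [];
const stubFetch = (u, cb) => { calls.push(u); cb({ status: 200, body: 'ok' }); };
const quotes = makeProxyNode('https://example.test/quotes', stubFetch);
let response;
quotes.onOutput((r) => { response = r; });
quotes.receive('AAPL');
```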
Operation
For the network tool, the sidebar will show both true Proxy Nodes (those that may GET or POST and propagate the response to other nodes and/or clients) and Restricted Proxies (nodes whose server response flows only to attached document elements – not to other nodes and/or clients).
Select a Proxy Node to monitor and then click on [start] in the activity menu bar.
Dataflow Console
The dataflow console provides two features in separate panels. The first panel, the dflibg console log, captures dataflow log messages and exceptions issued by the dflibg library in the window for which dflibg dfirefly was activated.
I - dataflow console log
Tracking down just what caused a runtime exception in any programming system is often fraught with utter frustration. The dflibg library catches exceptions, and its exception messages map them to their origin. Not only is the exception identified, but also where it occurred and why.
That is, dflibg-generated exception messages identify the specific node (and module) in whose implementation function the exception occurred. The messages also identify the origin of the error – the input to the dataflow engine and the node where that input was injected.
console log history
It is an unfortunate fact that the dflibg library, in the window/tab to be inspected, will have already issued many log and exception messages before dflibg™ dfirefly can be opened. Not to worry, all messages are cached and can be retrieved at any later time.
II - dataflow console command panel
The second console panel, the dataflow console command panel, is used to directly manipulate the dataflow nodes and application, much as typical developer tools' consoles allow inspection of javascript objects, invocation of functions, etc., by direct entry of javascript code.
While javascript cannot be entered at a command prompt,
- data can be injected into the running dataflow engine,
- node configurations/properties may be modified
- selected library, module, and node methods may be invoked.
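A sketch of what console-style injection might look like under the hood – the engine object and the method names here are hypothetical, not the dflibg console's actual interface:

```javascript
// A toy 'engine' keyed by node name; inject() hands a value to a node,
// configure() modifies a node's properties.
const engine = {
  nodes: new Map(),
  register(name, onData) { this.nodes.set(name, { onData, config: {} }); },
  inject(name, value) { this.nodes.get(name).onData(value); },
  configure(name, opts) { Object.assign(this.nodes.get(name).config, opts); },
};

const seen = [];
engine.register('keyboardInput', (v) => seen.push(v));  // an injection node
engine.inject('keyboardInput', 'Enter');       // as if entered at the console
engine.configure('keyboardInput', { trace: true });     // modify a property
```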
Lint and the dflibg Dynamic Linker
dflibg™ lint does for dataflow – nodes, datastreams, modules and their graphs – what the Programmer's Workbench lint(1) did, what modern GCC compilers do for C/C++, and what some Javascript Lints do for imperative Javascript:
It identifies inconsistencies in the dataflow relationships, possible issues to investigate or verify design intention, etc.
Where traditional lint(1) identifies dead and unreachable code, mismatched call parameters, ambiguous usage, etc., dflibg lint identifies
- orphans, nodes with no inputs
- dead-end dataflows, nodes whose output goes nowhere, attaches to no document element, is never forwarded to another module, server, etc.
- inconsistent numbers of input datastreams, inappropriate or misused node types
- dataflow cycles and loops, recombinant paths
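Two of the checks above (orphans and dead-end dataflows) can be sketched over a toy graph. The graph representation and the naming convention are invented for this example, not dflibg lint's internals:

```javascript
// streams is a list of [from, to] pairs; nodes is a list of node names.
// Injection nodes legitimately have no input, and attachment nodes
// legitimately have no downstream node, so each check exempts its prefix.
function lintGraph(nodes, streams) {
  const hasInput = new Set(streams.map(([, to]) => to));
  const hasOutput = new Set(streams.map(([from]) => from));
  return {
    orphans: nodes.filter((n) => !hasInput.has(n) && !n.startsWith('inject')),
    deadEnds: nodes.filter((n) => !hasOutput.has(n) && !n.startsWith('attach')),
  };
}

const report = lintGraph(
  ['injectClick', 'scale', 'lonely', 'attachLabel'],
  [['injectClick', 'scale'], ['scale', 'attachLabel']],
);
// 'lonely' has no inputs, and its output goes nowhere
```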
When the dflibg Dynamic Linker is activated, names and references unresolved at load/instantiation are resolved on-the-fly – when modules, CBF and document sections, etc., are loaded at some later time. dflibg unresolved may be invoked at any time to examine what references remain to be resolved.
For more information, see dflibg...Volume II - The Programmer's Desk Reference
Map/Graph
While dfe and modules form the static design of a dataflow application, the dataflow design – the dataflow graph of the paths data follows from its injection into the dataflow engine until it is exported or written to a document – is analyzed via map/graph
map
analyzes the dataflow in the application, unrolling the dataflow to create
- a list of the dataflow clusters
- a report on how those clusters overlap
graph
derives the graph – nodes and datastreams/arcs. Currently, a dot language representation of the DFE(s) is generated and displayed in a separate window.
Until the javascript port of Graphviz has been integrated, copy the output and feed it to dot(1) on your Unix™ or Linux system. If the graph visualization suite is not already on your Mac™ OS X system, the download is available on the Mac App Store.
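Emitting a dot representation like the one described can be sketched as follows; the node/arc data here is illustrative, not produced by dflibg:

```javascript
// Render [from, to] datastream arcs as a Graphviz digraph.
function toDot(graphName, arcs) {
  const lines = arcs.map(([from, to]) => `  "${from}" -> "${to}";`);
  return `digraph ${graphName} {\n${lines.join('\n')}\n}`;
}

const dot = toDot('checkout', [
  ['cartTotal', 'grandTotal'],
  ['taxRate', 'grandTotal'],
]);
// Save the result and feed it to dot(1), e.g.:
//   dot -Tsvg checkout.dot -o checkout.svg
```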
Reference Set
In the future, this will have links to dfTools topics in the dflibg Desk Reference. For now, you'll just have to buy the book, make a donation, or open a subscription.
www.dflibg.org/support