Tuesday, December 04, 2007

flog: Profiling Complexity

One tool that I should have included in my survey, but forgot, is flog, yet another great tool from Ryan Davis and Eric Hodel. flog is like a profiler for your code’s complexity instead of its performance¹.

Why worry about complexity? Well, there are three good reasons I can think of:

  1. If you’re dealing with legacy code, knowing where the real complexity is will help you prioritize your code reading as you try to figure out the code base.
  2. In my experience, the complex little knots of code are where bugs are most likely to lie, so flog can tell you where to focus your testing.
  3. Finally, those complex sections of code also become great candidates for refactoring—it’s always easier to debug, optimize, or add features to code that’s easier to understand.

flog is a gem so it’s easy to install, and once installed it’s easy to run. To run it against my LogWatchR tool, I just need to drop into the logwatchr/lib directory and do:

flog logwatchr.rb > flog.report

(Since this generates a pretty lengthy report, I redirected it out to a file.) flog scores each method by handing out weighted points for assignments, branches, and calls, then totaling them up. Here’s the trimmed output from running it:


  Total score = 211.720690020501
   
  WatchR#analyze_entry: (34.2)
     9.8: assignment
     7.0: branch
     4.5: mark_host_last_seen
     3.2: pattern
     2.8: []
     2.8: is_event?
     2.0: alert_type
     2.0: alert_target
     1.8: alert_msg
     1.8: notify
     1.6: event_notify?
     1.3: notify_log
     1.3: join
     1.3: split
     1.3: each
     1.3: now
     1.3: each_value
     1.3: record_host_if_unknown
     0.4: lit_fixnum
  WatchR#event_threshold_reached?: (31.6)
    21.3: []
     2.6: branch
     1.8: tv_sec
     1.6: -
     1.5: length
     1.4: >
     1.4: assignment
     1.3: >=
     1.3: mark_alert_last_seen
     1.3: delete_if
.
.
.

I’m going to skip over WatchR#analyze_entry: even though it has the higher total score, its points are spread fairly evenly across lots of small calls, so there’s no single hotspot to attack. WatchR#event_threshold_reached?, on the other hand, piles most of its score onto a single item. Its code looks like this:


  def event_threshold_reached?(host, event_type, time)
    @hosts[host][event_type][:alert_last_seen].delete_if { |event_time|
      time.tv_sec - event_time > 
      @hosts[host][event_type][:alert_last_seen_secs]
    } 
    mark_alert_last_seen(event_type, host, time)

    if @hosts[host][event_type][:alert_last_seen].length >=
        @hosts[host][event_type][:alert_last_seen_num]
      true
    else
      false
    end
  end

The report shows a lot of complexity piling up in the hash key lookups: each @hosts[host][event_type][...] chain is three [] calls, and this short method has four of those chains. That corresponds to a change I keep meaning to make, but haven’t gotten around to. I think the whole nested hash structure is ugly and hard to maintain, so I’ve been planning on replacing it with a better object structure. It looks like flog agrees with me.
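
To make that concrete, here’s the rough direction I have in mind: a small object that owns the per-host, per-event-type alert bookkeeping, so callers stop reaching through three levels of hash. This is only a sketch; EventStats and its method names are hypothetical, not anything that exists in LogWatchR today.

  # Hypothetical replacement for the nested @hosts[host][event_type] hash
  class EventStats
    def initialize(window_secs, threshold)
      @window_secs = window_secs   # was :alert_last_seen_secs
      @threshold   = threshold     # was :alert_last_seen_num
      @alert_times = []            # was :alert_last_seen
    end

    # Drop alerts that have aged out of the window, then record this one
    def record_alert(time)
      @alert_times.delete_if { |t| time.tv_sec - t > @window_secs }
      @alert_times << time.tv_sec
    end

    def threshold_reached?
      @alert_times.length >= @threshold
    end
  end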

WatchR#event_dependencies_met? (not shown above) also picks up a lot of its complexity from hash traversal, so finally sitting down to make the change from a nested hash would be a win there too.
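
Just to show where that lands, here’s how event_threshold_reached? might look once the nested hash gives way to something like the hypothetical EventStats sketched above; event_dependencies_met? would slim down the same way:

  def event_threshold_reached?(host, event_type, time)
    # @hosts[host][event_type] now holds an EventStats rather than a hash
    stats = @hosts[host][event_type]
    stats.record_alert(time)
    stats.threshold_reached?
  end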

¹ If you were looking for an article on profiling, you might also want to look at these:

1 comment:

  1. There is also the Saikuro cyclomatic complexity analyzer.
