
PCoIP Log Viewer update for View 6.0

Let me open this post with an apology – I have been pulled away from supporting the Log Viewer (and the Config Utility) for far too long and correspondingly this site has suffered as well. So for all the users of these tools – I apologize.  I plan to devote time to both sets of tools in the coming weeks.  Thank you for hanging in there and finding value in these tools despite the absence.

Ok, with that out of the way-

What’s New

View 6.0 is here and with it – you guessed it – changes to the log file format.  In a previous post (here) I detailed a significant change to the log parser with the intent of making future parser updates much easier.  I am happy to say that investment is paying off. Minor changes were required to support both View 5.3 and View 6.0.  Previously, weaving in the new data detection and extraction would have required significantly more effort and would have been much more fragile.  As it is, things are working almost exactly as designed (I did make a few framework tweaks mostly around package visibility but I should have made those in the first place, so this was more of a correction than new development).

That being said, looking through the new files I am seeing a lot of differences: information that is reported differently, old information that had disappeared and is now back, existing data that is no longer there, and some data that is entirely new. Some of this new data might turn out to be of great value. I’ll get into some of those differences in a moment, but it’s these discoveries that lead me to believe I’ll have at least one more follow-up release for View 6.0 once I confirm the value of this new data.

Reaching the Plateau – or Not

One change you will notice is that Plateau no longer seems to be tracked:

View 5.3 log file:

Tx thread info: bw limit = 1276, plateau = 1152.0, avg tx = 5.3, avg rx = 4.6 (KBytes/s)

View 6.0 log file:

Tx thread info: bw limit = 150, avg tx = 27.0, avg rx = 7.5 (KBytes/s)

So, to maintain simplicity, I am just setting Plateau to ‘0’ in the graphs.  From what I can tell, the value didn’t simply move, as has happened in previous releases; it’s just not in there any longer.  If I hear about or find some way to derive it, I’ll add it back in.  Plateau was a good indicator of how much bandwidth PCoIP “thought” it could use at any given time, so I do think some value has been lost here.

Other Changes

Loss conditions seem to have a completely new logging format with some new data not found in previous logs:

06/20/2014, 09:33:20.028> LVL:3 RC:   0 MGMT_PCOIP_DATA :ubs-BW-loss:  ----->  lost=1
06/20/2014, 09:33:20.047> LVL:3 RC:   0 MGMT_PCOIP_DATA :ubs-BW-tcp1:  ----->  lost/xmit=0.20/85.47, rate=0.2391
06/20/2014, 09:33:20.047> LVL:3 RC:   0 MGMT_PCOIP_DATA :ubs-BW-tcp2:  ----->  active[kbit/s]=12338.8543, TCP fair bw[kbit/s]=3317.7040, MSS=1100.00, RTT(aged)=124.1, loss=0.0391
06/20/2014, 09:33:20.047> LVL:1 RC:   0 MGMT_PCOIP_DATA :ubs-BW-decr: Decrease (loss) loss=0.001 current[kbit/s]=5819.2823, active[kbit/s]=12338.8543 -> 5734.3661, adjust factor=1.46%, floor[kbit/s]=104.0000

Tracing to the endpoint is also something I came across. It might be fun to build a network map out of this if the data is valid and usable. (IPs changed to protect the guilty):

06/20/2014, 09:24:58.026> LVL:2 RC:   0       SOCK_UTIL :trace_route: ---->  trace complete.  30 hops detected
06/20/2014, 09:24:58.026> LVL:2 RC:   0       SOCK_UTIL :hop [ 1] -- rtt [1]:     2 ms,  [2]:    <1 ms,  [3]:    <1 ms -- ip: -- name:
06/20/2014, 09:24:58.026> LVL:2 RC:   0       SOCK_UTIL :hop [ 2] -- rtt [1]:     2 ms,  [2]:     1 ms,  [3]:     2 ms -- ip: -- name:  
06/20/2014, 09:24:58.026> LVL:2 RC:   0       SOCK_UTIL :hop [ 3] -- rtt [1]:     2 ms,  [2]:     2 ms,  [3]:     1 ms -- ip: -- name:  
06/20/2014, 09:24:58.026> LVL:2 RC:   0       SOCK_UTIL :hop [ 4] -- rtt [1]:    34 ms,  [2]:    37 ms,  [3]:    35 ms -- ip: -- name:  

And lastly some new image stats that look VERY interesting:

06/19/2014, 12:09:01.723> LVL:2 RC:   0        MGMT_IMG :Image stats [MBits/s]: Total=0.428667 Lossy=0.120958 Lossless=0.242432 Build=0.0652768 (Lossy 0.0652768) Cache=0.000947985
06/19/2014, 12:09:01.723> LVL:2 RC:   0        MGMT_IMG :Image stats [MPPS]: Lossy=0.195575 Txt=0.162369 Bkgnd=0 Motion=0 TileCache=0.00347088 - deprecated - values duplicated below
06/19/2014, 12:09:01.723> LVL:2 RC:   0        MGMT_IMG :Image stats [MPPS]: Changed=0     LS(txt+bkg)=0.162369     LY(chg)=0.195876     LY(bld)=0.297299     LY(LS_res)=0
06/19/2014, 12:09:01.723> LVL:2 RC:   0        MGMT_IMG :Image stats [MPPS]: Motion=0     zero_offset_tile_cache_hits=0.00347088     offset_tile_cache_hits=0     temporal_cache_hits=0
06/19/2014, 12:09:01.723> LVL:2 RC:   0        MGMT_IMG :Image stats [MPPS]: LY_mec_restart=0     LY_txt_restart=0

First, it looks as if some previous stats like build/delta/lossy/changed have returned after getting lost somewhere around the 5.x time frame. Before I throw those back in, I need to validate them a bit more.  But more importantly, I believe this to be the first real appearance of CACHE stats in the logs! That might be very interesting! I have put in some requests to Teradici to help make sure I can interpret these values correctly and once I have that confirmed I suspect I’ll need to add a few new graphs in.

Getting the Update

If you already have the Log Viewer installed, you should get the update on the next launch. For those who do not have it, you will need to have Java 1.6+ installed and you can find the JNLP file here:

As always, I expect there to be bugs.  I also have not validated the parser against EVERY possible build of View 6.0, though I did try to cover what I believe are the beta and the GA releases. If you see an error saying no valid parser can be found, please reach out in the comments and I’ll arrange some way for you to send me the file so I can figure out what went wrong. The same goes for other errors. I just don’t have a large sampling of data files to test against right now, but I wanted to get this initial update out quickly in the hope that it helps at least some people right off the bat.

In closing, I want to say thank you again for your patience and I hope you continue to find this tool useful.


PCoIP Log Viewer Updated for View 5.3

Well, like an idiot, I’ve had this update ready to go for a while (before xmas of 2013) and had left it in a beta state.  I never heard anything bad back from the beta testers (which could be good or bad I guess!) but I’ll assume things are working ok.

So it’s probably time to move this over into “prod” and get it out there as I’ve had a few requests recently for this update.

So long as you are only parsing View 5.2 or newer log files you can do it all in the Log Viewer itself under the new parsing framework. You can find the Log Viewer here.

If you need to parse older log files (5.1 or older) you’ll need the standalone parser for that.  You can find it linked inside the content here.

For those of you who already have the Log Viewer installed it should auto update – if it does not, or if it breaks somehow, please uninstall it via the Java Control Panel console (or appropriate interface for your chosen OS), and then re-install it from the JNLP file linked above.

Thank you and let me know if you’re having issues with log files!  I had a very limited subset to test with when making the update so it’s quite likely there will be errors lurking out there somewhere.


Rethinking the PCoIP Config Utility

As I am doing some planning for View 5.3 and the new dynamic PCoIP tuning feature I am debating re-writing the PCoIP Config Utility from the ground up.  I’d like your feedback on a few items though, the first of which is this-

One “issue” with the utility as it stands today is that it requires Admin privileges to run.  This can be dealt with in various ways but there’s only really one reason it requires escalation – the PCoIP registry entries are in HKLM and not HKCU.  In order to get access to the keys and, most importantly, make changes to them, you need higher privileges.

This could be worked around by changing the security on the Registry keys that house the PCoIP settings, but I don’t know whether most users and/or their security teams would frown upon this.

So, I am asking: are people OK with things as they are and having the app require escalation, or would they prefer to make other changes and have the app run without escalation?  If so, what changes are they willing to make?

Also, I am open to other thoughts/options on how to implement this as well.  One option I tossed around a while ago was to install a system-level service with access to the required Registry keys and then have the front-end piece talk to that service to apply changes.  This way, you could install the service once into a master image and users could run the utility as-is and stop/restart it without any special permissions.  Is this a better way?

Please drop your thoughts into the comments.


Dynamic PCoIP in View 5.3

Dropping in a quick update from VMworld Europe…

If you read my previous post you know that a new feature was added to View 5.2 that was stated as something along the lines of “GPO options take effect immediately for PCoIP”. Well, it was true but it didn’t exactly work as intended. It worked only with a very small number of options and hence was rendered not terribly useful.

I am happy to say that I have been working with near-release-ready code for View 5.3 and this feature now seems to work as intended for pretty much all of the important tuning options. This is great news! It means I have to go back and dust off my work on the real-time tuner tool I had begun building at the start of the year and combine it with some updates to the PCoIP Configuration Utility, but I believe I can deliver a really solid workflow for the creation, distribution, and (dynamic) application of PCoIP tuning profiles.

Stay tuned in the coming weeks and I’ll keep you updated as to how things progress.


PCoIP Log Viewer Updated for View 5.2

First off, an apology.  This update took way too long to get into your hands, but here it is… In this post, I’ll explain the new update, and then likely dive into far more detail than anyone ever wanted to know about the new parsing framework included in this release.

Update for View 5.2

Goodbye standalone parser

With this release the PCoIP Log Viewer now supports parsing View 5.2 logs.  There’s a pretty major change in how this functionality is provided – View 5.2 log files should only be parsed directly from the Viewer.  Let me repeat that – the standalone parser will not work for 5.2 log files; you now parse these directly in the Viewer.  For any log files prior to View 5.2 you will still need to use the standalone parser. I realize this is likely to cause some confusion and frustration, but I had to make a break from the “legacy” parser if I was going to maintain my sanity. It also sets the stage for some future development I hope to do, and which I’ll talk more about later.

This also means, for the moment, that if you depend upon embedding the parser into your VDI images to pre-process the log files before opening them in the Viewer (and I know some people have scripted this), you are out of luck.  There are two ways to look at this: either I am the Dark Lord Satan out, specifically, to ruin your life, or your scripting is now simplified in that you no longer need to pre-process the files or add an executable to the master image/clones. Keep the logic that moves the log files out, and then just open THOSE in the Log Viewer.  However, if I get enough hatred and anger, I am certainly open to looking at making a new standalone parser based off the new framework.  The catch is that it may require Java as a pre-req, or need to be wrapped into a much larger executable, so it’s not a perfect replacement.

Opening View 5.2 log files

With the log file parser now integrated into the Viewer itself, using the new View 5.2 support is as easy as dragging and dropping a “raw” logfile into the Viewer window, or opening the file via the Open Log file menu option.  Just to be clear, by “raw” logfile I mean the pcoip_server_<year>_<month>_<day>_<id>.txt file (e.g. pcoip_server_2013_05_17_00000b90.txt).  You can still open regular XML files from the standalone parser the same way you always have. Once opened, things should look and work more or less as they always have; no new changes to learn there. Here’s a video of the basic usage (except you now CAN open files from the file dialog, and the parser warning is gone):

Getting the Update

All you should need to do is launch the PCoIP Log Viewer while connected to the internet and it should auto-update. If you don’t have the Log Viewer installed already you’ll need Java 6.x or greater installed, then you can go here to get it: PCoIP Log Viewer Webstart


This was a big update. While it doesn’t look like it on the surface, the underpinnings of the parser are a major change (as you’ll see below), so I expect there to be bugs.  Please let me know what you run into, and if you can, please include a link to the log file causing the issue.  Contact me here (via comments) or on Twitter @rexremus and I’ll do my best to get things fixed up. I hope you enjoy this update and get some good use out of the tool.  If all you care about is the update – stop here; the rest of this post is going to dive into the details of the new parser and why I wanted to make this change.  If that doesn’t sound interesting, get out now while you still can!

New Parsing Framework

The Need for Modular and Extensible Parsing

The previous standalone parser rapidly grew into a mess. Why?  Because I had developed an unhealthy desire to have one singular parsing “engine” that could handle anything thrown at it.  There are several ways to do this, and the least desirable among them is really what the parser had become over the years – it was a singular parser instance with a ton of exception/quirk-checking to be able to deal with all variants of the PCoIP log files.

This is a trap that many of us fall into. You begin by writing what you expect will be a quick and dirty solution to a problem with no real thought of modularity, flexibility, or maintenance.  Then people actually start using it, so now you have a community to support.  A minor change needs to be made (like a small change to the content you are parsing) and you just lay it in, no need to refactor things for such a small change, right?  Well, one small thing becomes 12, and some of those end up being pretty significant changes that are made worse because of previous changes that required some really hacky workarounds.  And next thing you know, you are maintaining a complete mess.

This is where things were at in the standalone parser after the release of View 5.1.  That was a major update and required jumping through so many hoops in the existing code I had vowed (to myself anyway) I would not do it again.  If I had to change the parser again I was going to build it new, and take the lessons I learned from supporting the original version to make something easier to maintain going forward.  I’d given up all hope of the log files staying consistent at this point – and by consistent I don’t mean never adding new data, that’s always to be expected; by consistent, I mean not altering the way existing data is presented and/or ordered so that only new data was required to be dealt with on a new release.

So once I had the time to deal with an update to 5.2 (after hearing from many people that 5.2 logs were not working), I began working out a parsing framework that is, I hope, simple to maintain.

The New Framework

The basis of the new framework isn’t anything new or exotic, it’s just a few changes that should help make things much more flexible going forward.  At a really high level, I just decomposed the monolithic parser into a few component parts as shown below:

No magic here, ok, maybe a little

Here you can see a parsing component, a log file “model” component, and a formatting/output component.  The flow of running a document through here works mostly as it’s shown – the parsing component is responsible for converting the “raw” input into an intermediary container – the PCoIPLogFile object – which can then be fed into a formatting component that takes the intermediary container and “renders” it out to a new form.

All of these steps existed in the monolithic parser, but there was no easy way to extend or change any of the individual pieces.  Here, each component represents a unique entity and much better encapsulates the specific functionality it is responsible for.  Next we’ll break down some of the individual pieces and talk about how they are implemented.

PCoIPLogParser Component

The log parser box in the high-level diagram really represents a base class which can then be extended to create version specific parser instances.


Parsers are built in a SAX-like fashion and can be built for new log formats much more easily

I decided to base the parsing side of the new framework around a SAX-like implementation.  If you are unfamiliar with SAX I will give a mangled, tremendously abbreviated breakdown of it here- SAX is used to parse XML documents and uses an event-based model.  Rather than trying to load the entire contents of an XML document into memory (that is DOM parsing), SAX defines a set of methods that will be called at various meaningful points as the document is parsed such as startDocument(), endDocument(), and startElement().  These allow you to take action within the context of the event, for example creating your intermediary object in the startDocument() method, and cleaning it up in the endDocument() method.  For extracting specific information from XML elements startElement() methods get passed information about the element that triggered the event which you can then examine and determine if it’s an element you care about.  SAX is great when you have to parse very large files as it’s much more memory efficient than a DOM parsing implementation, and it gives you flexibility in that you can extend existing classes that implement all required methods (far more than I covered here) and only override the ones you care about – which may be as simple as just implementing the startElement() method.
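To make that event model concrete, here is a minimal, self-contained SAX handler (a plain illustration of the pattern, not code from the Log Viewer): extend DefaultHandler and override only the callbacks you care about.

```java
import java.io.StringReader;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

// Count the elements in an XML document without loading it into memory.
class ElementCounter extends DefaultHandler {
    int elements = 0;

    @Override
    public void startDocument() {
        elements = 0;  // fired once, before any elements are reported
    }

    @Override
    public void startElement(String uri, String localName,
                             String qName, Attributes attrs) {
        elements++;  // fired once for every opening tag as the stream is read
    }

    static int count(String xml) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        ElementCounter handler = new ElementCounter();
        parser.parse(new InputSource(new StringReader(xml)), handler);
        return handler.elements;
    }
}
```

Everything else (endDocument(), characters(), error handling, and so on) is inherited from DefaultHandler’s no-op implementations, which is exactly the “override only what you care about” flexibility described above.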

When creating the new parsing framework I decided to use an abstract base class so that I could provide a handful of methods that I don’t see changing very much, as well as house a few data structures and constants that I knew from prior experience would be used throughout the framework.  I could have (maybe should have?) gone with an interface here and then created a base parser class off of that and then have all “real” parser implementations extend that base class, but it seemed like a wash in the end as I was not sure the extra flexibility would be of significant benefit.  Building a parser for these log files is a non-trivial exercise, it’s not just 1-2 methods you might tack onto another class and quickly flesh out.  So in this case, I went with an abstract base class and handled as much core functionality as I could there.

Looking at a PCoIP log file we see that lines have a basic structure:

06/13/2013, 03:28:01.456> LVL:2 RC: 0 MGMT_SYS :UDP port values: base=4172, range=10, set=4172
06/13/2013, 03:28:01.456> LVL:2 RC: 0 COMMON :load_crypto_lib: FIPS mode enabled=0
06/13/2013, 03:28:01.488> LVL:2 RC: 0 MGMT_SSIG :(tera_xml_init) -- Software Build ID: soft_pcoip_rc_3_12
06/13/2013, 03:28:01.503> LVL:2 RC: 0 COMMON :Initializing Winsock2
06/13/2013, 03:28:01.519> LVL:2 RC: 0 SCNET :PCoIP Soft Crypto Module Build ID: trunk
06/13/2013, 03:28:01.846> LVL:2 RC: 0 MGMT_ENV :Setting ENV variable[ 7]: pcoip.tcp_port = 4172

If we break this down a little, we see that every line starts with a timestamp, followed by a logging level, an RC code, a line “type” or category, and then the actual data or content of the line. I decided to model my SAX-like parsing around the line type element.  The base parser will run through the file and call specific methods for each line type encountered (COMMON, MGMT_SSIG, SCNET, etc.) as well as methods for the file start and end.  Each of these methods is defined as abstract in the base parser class and must be implemented in any concrete classes.
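As an illustration of that line structure, here is a hypothetical sketch (the real Log Viewer classes are named and shaped differently) of decomposing one line into its timestamp, level, RC, type, and content fields with a single regular expression:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: one parsed line of the form
//   <timestamp>> LVL:<n> RC:<n> <TYPE> :<content>
class PCoIPLogLine {
    // Groups: 1=timestamp, 2=level, 3=rc, 4=type, 5=content
    private static final Pattern LINE = Pattern.compile(
        "^(\\S+, \\S+)> LVL:(\\d+) RC:\\s*(-?\\d+)\\s+(\\S+)\\s+:(.*)$");

    final String timestamp, type, content;
    final int level, rc;

    private PCoIPLogLine(String ts, int level, int rc,
                         String type, String content) {
        this.timestamp = ts; this.level = level; this.rc = rc;
        this.type = type; this.content = content;
    }

    /** Returns null when the line does not match the expected shape. */
    static PCoIPLogLine parse(String raw) {
        Matcher m = LINE.matcher(raw);
        if (!m.matches()) return null;
        return new PCoIPLogLine(m.group(1), Integer.parseInt(m.group(2)),
                Integer.parseInt(m.group(3)), m.group(4), m.group(5));
    }
}
```

A dispatcher can then switch on the type field (COMMON, MGMT_SSIG, and so on) and fire the matching handler method, which is the SAX-like behavior described above.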

I then created a default implementation of a parser (DefaultPCoIPLogParser) that is based around parsing a View 5.2 log file.  In this class I provide implementations of the methods for every line of “interest” in the log file needed to display it properly in the Log Viewer.  This base implementation can then be extended when the log file format changes again, and only the methods that deal with the lines (really, the line types) that changed will need to be updated.  This greatly simplifies dealing with new formats because there’s no need to worry about backwards compatibility as there was in the monolithic parser – you can do whatever hacky, ultra-version-specific parsing tricks make dealing with the current log file as easy as possible, without any concern of breaking the parsing of a previous version, and then let the existing methods of the parent class deal with everything else that didn’t change.  In fact, if the inherited method doesn’t blow up or cause any other “damage” to the data placed in the intermediary object, you can call it from your overridden method and then come back and “correct” or populate only the data pieces affected by the new log format.  This way you don’t need to copy/paste or otherwise replicate the existing parsing from the parent class, which saves you more time when creating the new parser version.
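The override-then-delegate pattern might look something like the sketch below. To be clear, the class, method, and field names here are invented for illustration and differ from the real code; the bandwidth-limit line is taken from the log excerpts earlier in this post.

```java
// Hypothetical sketch of the extension pattern: a base parser with one
// handler per line type, and a version-specific subclass that overrides
// only the handlers whose line format changed.
abstract class LogParserBase {
    long bwLimitKBytesPerSec = -1;  // part of the intermediary state

    void handleMgmtPcoipData(String content) {
        // 5.x-era format: "Tx thread info: bw limit = 1276, plateau = ..."
        String prefix = "Tx thread info: bw limit = ";
        if (content.startsWith(prefix)) {
            String rest = content.substring(prefix.length());
            bwLimitKBytesPerSec = Long.parseLong(rest.split(",")[0].trim());
        }
    }
}

class View60LogParser extends LogParserBase {
    @Override
    void handleMgmtPcoipData(String content) {
        super.handleMgmtPcoipData(content); // unchanged fields parse as before
        // ...then correct or populate only what the 6.0 format changed,
        // e.g. account for the plateau field no longer being present.
    }
}
```

Because the inherited handler does no “damage” when the line still matches, the subclass can delegate first and patch up afterwards, instead of copying the parent’s parsing wholesale.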

PCoIPLogFile Component

This is an intermediary object used to store the important data parsed from the log file.  Its purpose is really just to be a bridge between the input and output of the parsing process – a parser generates a log file object, and a formatter uses it as input.

Simplified view of the contents of a log file object

This isn’t to say the log file object is just a dumb container. It does contain a few methods, the most important of which is the validate() method.  Because of the move to a SAX-like parsing front-end, it became harder to determine if a log file was “good” or “bad” from within the parser alone.  Certainly things like an individual line can be validated, but does one missing line, or one malformed entry, mean the entire file is useless?  Probably not, but it might; it just depends on which line it is and possibly the context around it.  Rather than trying to put that kind of logic into what are essentially “stateless” methods within the parser (or going through a lot of effort to give them state), it made a lot of sense to just encapsulate the notion of “validity” into the object storing the entire document state itself.  Once the parsing is done, the parsing workflow validates the log file object before passing it to a formatter.  If the file is invalid, an exception is thrown and can be dealt with.  This greatly simplifies error checking: parser methods need only check the validity of a specific line (as that’s all the context they can be guaranteed to have), and the log file object itself can check for overall validity, given that by the point the validate() method is called, it can assume it holds a “complete” representation of the file.
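A minimal sketch of that whole-file validation idea follows. The field names and the exception type are invented for illustration; the real PCoIPLogFile holds far more state than this.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: the intermediary object judges overall validity
// once parsing has finished, throwing when the file is unusable.
class InvalidLogFileException extends Exception {
    InvalidLogFileException(String msg) { super(msg); }
}

class PCoIPLogFile {
    String softwareBuildId;                       // e.g. "soft_pcoip_rc_3_12"
    final List<double[]> txRxSamples = new ArrayList<>();

    // Parser handlers can only judge single lines; this method judges the
    // file as a whole, and is called before any formatter runs.
    void validate() throws InvalidLogFileException {
        if (softwareBuildId == null)
            throw new InvalidLogFileException("no Software Build ID found");
        if (txRxSamples.isEmpty())
            throw new InvalidLogFileException("no bandwidth samples found");
    }
}
```

The parsing workflow calls validate() once, after the last line has been handled, so the object can safely assume it is looking at a complete picture of the file.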

PCoIPLogFileFormatter Component

Just as the parser box in the high-level diagram represented an extensible component, the same is true for the formatter box. In this case though, I did go with an Interface as there’s only a singular method to be implemented and it’s quite possible to just tack it onto an existing class if the formatting required is not terribly complex.  For more complex formatting it’s likely a dedicated class will be built which can then be sub-classed as required to tweak or change behavior.

A log file formatter has a very simple interface

As you can see there’s just one method – format() – that returns an InputStream containing the formatted output.  This allows the resulting formatted content to be pretty easily stored to a physical file, or to be piped around and read from in memory.  Java’s streams (and all their variants) are fairly rich and well understood at this point so this seemed a pretty low-level and flexible result to get out of the formatter.
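In sketch form, the contract might look like this (the real format() takes the PCoIPLogFile model rather than a plain String, and the names here are invented):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of the one-method formatter contract.
interface LogFileFormatter {
    InputStream format(String logFileModel);
}

// A trivial XML-flavored implementation: the formatted output comes back
// as an in-memory stream that can be written to disk or read directly.
class XmlLogFileFormatter implements LogFileFormatter {
    @Override
    public InputStream format(String logFileModel) {
        String xml = "<pcoip-log>" + logFileModel + "</pcoip-log>";
        return new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8));
    }
}
```

Because formatting is only a single method, it can be tacked onto an existing class where the output is simple, or given a dedicated subclassable implementation where it is not.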

For the sake of the current work, I’ve created an XML formatter to allow for the existing visualization chain to be used in the viewer.  But I can easily imagine an Excel based formatter to generate a physical file, or a JSON formatter to allow the parser/viewer to move to a cloud-based application and send data down to a browser or mobile device.  All of these should be fairly easy to implement.

Gluing it all together – The Factory

Well, saying “all” might be a bit of a lie, but I wanted an easy way to make sure you always get the “right” parser, despite there only being one that’s of any use right now. So I decided to build a ParserFactory to do that.

Basically, the Factory will scan the classpath looking for PCoIPLogParser implementations and then register them with itself to build a catalog of all available parsers.  Within the app, when we need to parse a file, we can then call the cleverly named getParser(logFile) method.  This method examines the raw log file and extracts the “version” of the file based upon certain elements; it then queries every registered parser for its supported log file version.  When it finds a match, it returns that parser.  Currently there’s not much thought given to conflict resolution, e.g. what if more than one parser can handle a given version of file? But my thought on that is, within any given project, you probably wouldn’t need more than one way to “parse” a file – you might want more than one way to format it, but really you only care about getting it into the PCoIPLogFile object and then working with it from there.  So while more than one parser capable of dealing with a View 5.2 log file might exist, it isn’t likely that you’d see them together in the same project where they might cause collisions.
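The lookup half of that can be sketched as follows. Note this is an illustration only: the interface and class names are invented, and the real factory builds its registry by scanning the classpath rather than through explicit register() calls.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: each parser advertises which log file version it
// supports, and the factory returns the first registered match.
interface VersionedParser {
    boolean canParse(String logFileVersion);
}

class ParserFactory {
    private final List<VersionedParser> registry = new ArrayList<>();

    void register(VersionedParser parser) { registry.add(parser); }

    // The raw file's version is extracted elsewhere; each registered
    // parser is then asked whether it supports it.
    VersionedParser getParser(String logFileVersion) {
        for (VersionedParser parser : registry)
            if (parser.canParse(logFileVersion)) return parser;
        throw new IllegalStateException(
            "no valid parser found for version " + logFileVersion);
    }
}
```

The thrown exception is what ultimately surfaces as the “no valid parser can be found” error mentioned in the View 6.0 post above.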

This is a terrible diagram. You know it, I know it, let's just move on

The horrible diagram above shows (poorly) the basic flow of parsing a file:

  1. Get the file
  2. Ask the ParserFactory for a parser, passing it that file
  3. The factory determines the version of the file
  4. The factory queries all the registered parsers for their ability to parse it
  5. A valid parser capable of parsing the file is returned from the factory
  6. The parse() method is invoked on the file and the PCoIPLogFile object is returned
  7. The PCoIPLogFile is validated (this is where you can bail out before trying to format)
  8. If the PCoIPLogFile object is valid, it is formatted
  9. The returned InputStream is used as required – written to disk or passed to another method/process for display or further manipulation.

I might also consider a factory for the formatter classes as well, something where you can pass in a desired format and get the right formatter back – or be informed no such formatter exists – and go from there.  But it seems rare outside of a web context (RESTful service) that any individual usage of this would require such flexibility so it wasn’t high on my list. For now, you know what format you need and can just go with that.  Anyway, I digress…

Wrapping it up

With the completion of this new parsing framework, parsing PCoIP log files going forward should now be easier to deal with.  Still frustrating, but easier.  At the very least, there is now a level of encapsulation and separation of duties that was missing from the organically “grown” monolithic parser that started it all.  The real test will be when the log viewer breaks again, as it inevitably will, and I am required to make the first updates.  That’s when I’ll know if this was worth doing.  But in my view, it’s fun to make updates like this.  Is my design perfect?  Hardly.  Are there probably a dozen libraries/frameworks I could have tapped to build this in?  Almost certainly.  But they often add a lot of baggage and “weight” and if you didn’t build your entire project against those frameworks it often means the benefits are limited or that you end up starting over.  Building this from scratch was satisfying on multiple levels.  It’s something I had wanted to do for a while, so I have the satisfaction of completing it.  It’s only as large and complex as it needs to be – which isn’t very much of either.  And lastly, it should allow me to support the users of this app better than I have been able to do in the past.

In the end, that’s all I really wanted.