Let me open this post with an apology – I have been pulled away from supporting the Log Viewer (and the Config Utility) for far too long and correspondingly this site has suffered as well. So for all the users of these tools – I apologize. I plan to devote time to both sets of tools in the coming weeks. Thank you for hanging in there and finding value in these tools despite the absence.
Ok, with that out of the way-
View 6.0 is here and with it – you guessed it – changes to the log file format. In a previous post (here) I detailed a significant change to the log parser with the intent of making future parser updates much easier. I am happy to say that investment is paying off. Minor changes were required to support both View 5.3 and View 6.0. Previously, weaving in the new data detection and extraction would have required significantly more effort and would have been much more fragile. As it is, things are working almost exactly as designed (I did make a few framework tweaks mostly around package visibility but I should have made those in the first place, so this was more of a correction than new development).
That being said, looking through the new files I am seeing a lot of information that is reported differently: old information that had disappeared is now back, some existing data is gone, and some data is entirely new. Some of this new data might turn out to be of great value. I’ll get into some of those differences in a moment, but it’s these discoveries that lead me to believe I’ll have at least one more follow-up release for View 6.0 once I confirm the value of this new data.
Reaching the Plateau – or Not
One change you will notice is that Plateau no longer seems to be tracked:
View 5.3 log file:
Tx thread info: bw limit = 1276, plateau = 1152.0, avg tx = 5.3, avg rx = 4.6 (KBytes/s)
View 6.0 log file:
Tx thread info: bw limit = 150, avg tx = 27.0, avg rx = 7.5 (KBytes/s)
So, to maintain simplicity, I am just setting Plateau to ‘0’ in the graphs. From what I can tell, the value didn’t just move to a different part of the log, as has happened in previous releases – it’s simply not in there any longer. If I hear about or find some way to derive it, I’ll add it back in. Plateau was a good indicator of how much bandwidth PCoIP “thought” it could use at any given time, so I do think some value has been lost here.
Loss conditions seem to have a completely new logging format with some new data not found in previous logs:
06/20/2014, 09:33:20.028> LVL:3 RC: 0 MGMT_PCOIP_DATA :ubs-BW-loss: -----> lost=1
06/20/2014, 09:33:20.047> LVL:3 RC: 0 MGMT_PCOIP_DATA :ubs-BW-tcp1: -----> lost/xmit=0.20/85.47, rate=0.2391
06/20/2014, 09:33:20.047> LVL:3 RC: 0 MGMT_PCOIP_DATA :ubs-BW-tcp2: -----> active[kbit/s]=12338.8543, TCP fair bw[kbit/s]=3317.7040, MSS=1100.00, RTT(aged)=124.1, loss=0.0391
06/20/2014, 09:33:20.047> LVL:1 RC: 0 MGMT_PCOIP_DATA :ubs-BW-decr: Decrease (loss) loss=0.001 current[kbit/s]=5819.2823, active[kbit/s]=12338.8543 -> 5734.3661, adjust factor=1.46%, floor[kbit/s]=104.0000
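If these fields hold up, extracting them should be straightforward. Here’s a rough sketch, in the same Java the Viewer is built in, of pulling the numbers out of the “ubs-BW-decr” line. The class name and returned field layout are my own invention, and the semantics of the values are still unconfirmed.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: extract loss/current/active/new-rate from the
// new "ubs-BW-decr" line content. Field meanings are unconfirmed.
public class BwDecrSketch {
    private static final Pattern DECR = Pattern.compile(
            "loss=([0-9.]+) current\\[kbit/s\\]=([0-9.]+), active\\[kbit/s\\]=([0-9.]+) -> ([0-9.]+)");

    // returns {loss, currentKbit, activeKbit, newKbit}, or null if no match
    public static double[] parse(String content) {
        Matcher m = DECR.matcher(content);
        if (!m.find()) return null;
        return new double[] { Double.parseDouble(m.group(1)),
                              Double.parseDouble(m.group(2)),
                              Double.parseDouble(m.group(3)),
                              Double.parseDouble(m.group(4)) };
    }
}
```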
Tracing to the endpoint is also something I came across. It might be fun to build a network map out of this if the data is valid and usable. (IPs changed to protect the guilty):
06/20/2014, 09:24:58.026> LVL:2 RC: 0 SOCK_UTIL :trace_route: ----> trace complete. 30 hops detected
06/20/2014, 09:24:58.026> LVL:2 RC: 0 SOCK_UTIL :hop [ 1] -- rtt : 2 ms, : <1 ms, : <1 ms -- ip: 18.104.22.168 -- name: view.somehost.com
06/20/2014, 09:24:58.026> LVL:2 RC: 0 SOCK_UTIL :hop [ 2] -- rtt : 2 ms, : 1 ms, : 2 ms -- ip: 22.214.171.124 -- name: 126.96.36.199
06/20/2014, 09:24:58.026> LVL:2 RC: 0 SOCK_UTIL :hop [ 3] -- rtt : 2 ms, : 2 ms, : 1 ms -- ip: 188.8.131.52 -- name: 184.108.40.206
06/20/2014, 09:24:58.026> LVL:2 RC: 0 SOCK_UTIL :hop [ 4] -- rtt : 34 ms, : 37 ms, : 35 ms -- ip: 220.127.116.11 -- name: 18.104.22.168
And lastly some new image stats that look VERY interesting:
06/19/2014, 12:09:01.723> LVL:2 RC: 0 MGMT_IMG :Image stats [MBits/s]: Total=0.428667 Lossy=0.120958 Lossless=0.242432 Build=0.0652768 (Lossy 0.0652768) Cache=0.000947985
06/19/2014, 12:09:01.723> LVL:2 RC: 0 MGMT_IMG :Image stats [MPPS]: Lossy=0.195575 Txt=0.162369 Bkgnd=0 Motion=0 TileCache=0.00347088 - deprecated - values duplicated below
06/19/2014, 12:09:01.723> LVL:2 RC: 0 MGMT_IMG :Image stats [MPPS]: Changed=0 LS(txt+bkg)=0.162369 LY(chg)=0.195876 LY(bld)=0.297299 LY(LS_res)=0
06/19/2014, 12:09:01.723> LVL:2 RC: 0 MGMT_IMG :Image stats [MPPS]: Motion=0 zero_offset_tile_cache_hits=0.00347088 offset_tile_cache_hits=0 temporal_cache_hits=0
06/19/2014, 12:09:01.723> LVL:2 RC: 0 MGMT_IMG :Image stats [MPPS]: LY_mec_restart=0 LY_txt_restart=0
First, it looks as if some previous stats like build/delta/lossy/changed have returned after getting lost somewhere around the 5.x time frame. Before I throw those back in, I need to validate them a bit more. But more importantly, I believe this to be the first real appearance of CACHE stats in the logs! That might be very interesting! I have put in some requests to Teradici to help make sure I can interpret these values correctly and once I have that confirmed I suspect I’ll need to add a few new graphs in.
Getting the Update
If you already have the Log Viewer installed, you should get the update on the next launch. For those who do not have it, you will need to have Java 1.6+ installed and you can find the JNLP file here: http://hirstius.com/webstart/pcoip/PCoIPLogViewer.jnlp
As always, I expect there to be bugs. I also have not validated the parser against EVERY possible build of View 6.0, though I did try to cover what I believe are the beta and the GA releases. If you see an error saying no valid parser can be found, please reach out in the comments and I’ll arrange some way for you to send me the file so I can figure out what went wrong. Same goes for other errors. I just don’t have a large sampling of data files to test against right now, but I wanted to get this initial update out quickly in the hopes that it helps at least some people right off the bat.
In closing, I want to say thank you again for your patience and I hope you continue to find this tool useful.
First off, an apology. This update took way too long to get into your hands, but here it is… In this post, I’ll explain the new update, and then likely dive into far more detail than anyone ever wanted to know about the new parsing framework included in this release.
Update for View 5.2
Goodbye standalone parser
With this release the PCoIP Log Viewer now supports parsing View 5.2 logs. There’s a pretty major change in how this functionality is provided – View 5.2 log files should only be parsed directly from the Viewer. Let me repeat that – the standalone parser will not work for 5.2 log files; you now parse these directly in the Viewer. For any log files prior to View 5.2 you will still need to use the standalone parser. I realize this is likely to cause some confusion and frustration, but I had to make a break from the “legacy” parser if I was going to maintain my sanity. It also sets the stage for some future development I hope to do, which I’ll talk more about later. This also means, for the moment, that if you depend upon embedding the parser into your VDI images to pre-process the log files before opening them in the Viewer (and I know some people have scripted this), you are out of luck. There are two ways to look at this: I am the Dark Lord Satan out, specifically, to ruin your life; or, your scripting is now simplified in that you no longer need to pre-process the files or add an executable to the master image/clones. Keep the logic that moves the log files out, and then just open THOSE in the Log Viewer. However, if I get enough hatred and anger, I am certainly open to looking at making a new standalone parser based off the new framework. The catch is that it may require Java as a pre-req, or need to be wrapped into a much larger executable, so it’s not a perfect replacement.
Opening View 5.2 log files
With the log file parser now integrated into the Viewer itself, using the new View 5.2 support is as easy as dragging and dropping a “raw” logfile into the Viewer window, or opening the file via the Open Log file menu option. Just to be clear, by “raw” logfile I mean the pcoip_server_<year>_<month>_<day>_<id>.txt file (e.g. pcoip_server_2013_05_17_00000b90.txt). You can still open regular XML files from the standalone parser the same way you always have. Once opened, things should look and work more or less as they always have; there are no new changes to learn. Here’s a video of the basic usage (except you now CAN open files from the file dialog, and the parser warning is gone):
Getting the Update
All you should need to do is launch the PCoIP Log Viewer while connected to the internet and it should auto-update. If you don’t have the Log Viewer installed already you’ll need Java 6.x or greater installed, then you can go here to get it: PCoIP Log Viewer Webstart
This was a big update. While it may not look like it on the surface, the underpinnings of the parser have changed in a major way (as you’ll see below), so I expect there to be bugs. Please let me know what you run into, and if you can, please include a link to the log file causing the issue. Contact me here (via comments) or on twitter @rexremus and I’ll do my best to get things fixed up. I hope you enjoy this update and get some good use out of the tool. If all you care about is the update – stop here; the rest of this post is going to dive into the details of the new parser and why I wanted to make this change. If that doesn’t sound interesting, get out now while you still can!
New Parsing Framework
The Need for Modular and Extensible Parsing
The previous standalone parser rapidly grew into a mess. Why? Because I had developed an unhealthy desire to have one singular parsing “engine” that could handle anything thrown at it. There are several ways to do this, and the least desirable among them is really what the parser had become over the years – it was a singular parser instance with a ton of exception/quirk-checking to be able to deal with all variants of the PCoIP log files.
This is a trap that many of us fall into. You begin by writing what you expect will be a quick and dirty solution to a problem with no real thought of modularity, flexibility, or maintenance. Then people actually start using it, so now you have a community to support. A minor change needs to be made (like a small change to the content you are parsing) and you just lay it in, no need to refactor things for such a small change, right? Well, one small thing becomes 12, and some of those end up being pretty significant changes that are made worse because of previous changes that required some really hacky workarounds. And next thing you know, you are maintaining a complete mess.
This is where things were at in the standalone parser after the release of View 5.1. That was a major update and required jumping through so many hoops in the existing code I had vowed (to myself anyway) I would not do it again. If I had to change the parser again I was going to build it new, and take the lessons I learned from supporting the original version to make something easier to maintain going forward. I’d given up all hope of the log files staying consistent at this point – and by consistent I don’t mean never adding new data, that’s always to be expected; by consistent, I mean not altering the way existing data is presented and/or ordered so that only new data was required to be dealt with on a new release.
So once I had the time to deal with an update to 5.2 (after hearing from many people that 5.2 logs were not working), I began working out a parsing framework that is – I hope – simple to maintain.
The New Framework
The basis of the new framework isn’t anything new or exotic, it’s just a few changes that should help make things much more flexible going forward. At a really high level, I just decomposed the monolithic parser into a few component parts as shown below:
Here you can see a parsing component, a log file “model” component, and a formatting/output component. The flow of running a document through here works mostly as it’s shown – the parsing component is responsible for converting the “raw” input into an intermediary container – the PCoIPLogFile object – which can then be fed into a formatting component that takes the intermediary container and “renders” it out to a new form.
All of these steps existed in the monolithic parser, but there was no easy way to extend or change any of the individual pieces. Here, each component represents a unique entity and much better encapsulates the specific functionality it is responsible for. Next we’ll break down some of the individual pieces and talk about how they are implemented.
The log parser box in the high-level diagram really represents a base class which can then be extended to create version specific parser instances.
I decided to base the parsing side of the new framework around a SAX-like implementation. If you are unfamiliar with SAX, I will give a mangled, tremendously abbreviated breakdown of it here: SAX is used to parse XML documents and uses an event-based model. Rather than trying to load the entire contents of an XML document into memory (that is DOM parsing), SAX defines a set of methods that will be called at various meaningful points as the document is parsed, such as startDocument(), endDocument(), and startElement(). These allow you to take action within the context of the event – for example, creating your intermediary object in the startDocument() method and cleaning it up in the endDocument() method. For extracting specific information from XML elements, startElement() gets passed information about the element that triggered the event, which you can then examine to determine if it’s an element you care about. SAX is great when you have to parse very large files, as it’s much more memory efficient than a DOM parsing implementation, and it gives you flexibility in that you can extend existing classes that implement all required methods (far more than I covered here) and only override the ones you care about – which may be as simple as just implementing the startElement() method.
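To make the event-driven idea concrete, here’s a minimal standalone example using the standard org.xml.sax API: extend DefaultHandler, override only the callback you care about, and the parser streams the document past you.

```java
import java.io.StringReader;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

// Minimal illustration of event-based SAX parsing: override only the
// callbacks you need and let DefaultHandler absorb the rest.
public class SaxSketch {
    public static int countElements(String xml) throws Exception {
        final int[] count = {0};
        DefaultHandler handler = new DefaultHandler() {
            @Override
            public void startElement(String uri, String localName,
                                     String qName, Attributes attrs) {
                count[0]++; // fired once per opening tag as the stream is read
            }
        };
        SAXParserFactory.newInstance().newSAXParser()
                .parse(new InputSource(new StringReader(xml)), handler);
        return count[0];
    }
}
```

Calling `countElements("<a><b/><c/></a>")` fires startElement() three times, once per element, without the whole document ever being held in memory as a tree.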
When creating the new parsing framework I decided to use an abstract base class so that I could provide a handful of methods that I don’t see changing very much, as well as house a few data structures and constants that I knew from prior experience would be used throughout the framework. I could have (maybe should have?) gone with an interface here, then created a base parser class off of that and had all “real” parser implementations extend that base class, but it seemed like a wash in the end as I was not sure the extra flexibility would be of significant benefit. Building a parser for these log files is a non-trivial exercise; it’s not just 1-2 methods you might tack onto another class and quickly flesh out. So in this case, I went with an abstract base class and handled as much core functionality as I could there.
Looking at a PCoIP log file we see that lines have a basic structure:
06/13/2013, 03:28:01.456> LVL:2 RC: 0 MGMT_SYS :UDP port values: base=4172, range=10, set=4172
06/13/2013, 03:28:01.456> LVL:2 RC: 0 COMMON :load_crypto_lib: FIPS mode enabled=0
06/13/2013, 03:28:01.488> LVL:2 RC: 0 MGMT_SSIG :(tera_xml_init) -- Software Build ID: soft_pcoip_rc_3_12 22.214.171.12451
06/13/2013, 03:28:01.503> LVL:2 RC: 0 COMMON :Initializing Winsock2
06/13/2013, 03:28:01.519> LVL:2 RC: 0 SCNET :PCoIP Soft Crypto Module Build ID: trunk 126.96.36.19958
06/13/2013, 03:28:01.846> LVL:2 RC: 0 MGMT_ENV :Setting ENV variable[ 7]: pcoip.tcp_port = 4172
If we break this down a little, we see that every line starts with a timestamp, followed by a logging level, an RC code, a line “type” or category, and then the actual data or content of the line. I decided to model my SAX-like parsing around the line type element. The base parser will run through the file and call specific methods for each line type encountered (COMMON, MGMT_SSIG, SCNET, etc.) as well as methods for the file start and end. Each of these methods is defined as abstract in the base parser class and must be implemented in any concrete classes.
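As a rough illustration of that line structure, here’s a hypothetical sketch of splitting a raw line into its fixed fields before dispatching on the type column. The regex and the returned field layout are my own, not the actual parser’s internals.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of the per-line split described above:
// timestamp> LVL:n RC: n TYPE :content
public class LogLineSketch {
    private static final Pattern LINE = Pattern.compile(
            "^(\\d{2}/\\d{2}/\\d{4}, \\d{2}:\\d{2}:\\d{2}\\.\\d{3})>\\s+" +
            "LVL:(\\d+)\\s+RC:\\s*(-?\\d+)\\s+(\\S+)\\s*:(.*)$");

    // returns [timestamp, level, rc, type, content], or null if the line
    // doesn't follow the standard shape
    public static String[] parse(String raw) {
        Matcher m = LINE.matcher(raw);
        if (!m.matches()) return null;
        return new String[] { m.group(1), m.group(2), m.group(3),
                              m.group(4), m.group(5) };
    }
}
```

Once the type field (group 4) is in hand, the base parser can route the content to the matching abstract handler method.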
I then created a default parser implementation (DefaultPCoIPLogParser) based around parsing a View 5.2 log file. In this class I provide implementations of the methods for every line of “interest” needed to display the log file properly in the Log Viewer. This base implementation can then be extended when the log file format changes again, and only the methods that deal with the lines (really, the line types) that changed will need to be updated. This greatly simplifies dealing with new formats because there’s no need to worry about backwards compatibility as there was in the monolithic parser – you can use whatever hacky, ultra-version-specific parsing tricks make the current log file as easy as possible to deal with, without any concern of breaking the parsing of a previous version, and let the existing methods of the parent class deal with everything else that didn’t change. In fact, if the inherited method doesn’t blow up or cause any other “damage” to the data placed in the intermediary object, you can call it from your overridden method and then come back and “correct” or populate only the data pieces affected by the new log format. This way you don’t need to copy/paste or otherwise replicate the existing parsing from the parent class, which saves more time when creating the new parser version.
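That inheritance trick can be sketched like this – the class and method names mirror the description above but are invented for illustration, and the trace field exists only to make the call order visible:

```java
// Hypothetical sketch of the version-specific extension pattern.
abstract class PCoIPLogParserBase {
    final StringBuilder trace = new StringBuilder();
    // one handler per line "type"; concrete parsers fill these in
    abstract void handleMgmtSys(String content);
}

class DefaultPCoIPLogParser extends PCoIPLogParserBase {
    void handleMgmtSys(String content) { trace.append("5.2;"); } // View 5.2 parsing
}

// A later format change only touches the line types that changed:
class View60LogParser extends DefaultPCoIPLogParser {
    @Override
    void handleMgmtSys(String content) {
        super.handleMgmtSys(content);  // reuse the 5.2 logic...
        trace.append("6.0-fixups;");   // ...then patch only what the new format moved
    }
}
```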
The PCoIPLogFile class is the intermediary object used to store the important data parsed from the log file. Its purpose is really just to be a bridge between the input and output of the parsing process: a parser generates a log file object, and a formatter uses it as input.
This isn’t to say the log file object is just a dumb container. It does contain a few methods, the most important of which is the validate() method. Because of the move to a SAX-like parsing front-end it became harder to determine if a log file was “good” or “bad” from within the parser alone. Certainly things like an individual line can be validated, but does one missing line, or one malformed entry mean the entire file is useless? Probably not, but it might, it just depends on which line it is and possibly the context around it. Rather than trying to put that kind of logic into what essentially amount to “stateless” methods within the parser (or going through a lot of effort to give them state) it made a lot of sense to just encapsulate the notion of “validity” into the object storing the entire document state itself. Once the parsing is done, the parsing workflow validates the log file object before passing it to a formatter. If the file is invalid, an exception is thrown and can be dealt with. This greatly simplifies error checking in that parser methods need only check for validity of a specific line (as that’s all the context they can be guaranteed to have) and the log file object itself can check for overall validity given that it can assume the point where the validate() method is being called, it should be considered a “complete” representation of a valid file.
Just as the parser box in the high-level diagram represented an extensible component, the same is true for the formatter box. In this case though, I did go with an Interface as there’s only a singular method to be implemented and it’s quite possible to just tack it onto an existing class if the formatting required is not terribly complex. For more complex formatting it’s likely a dedicated class will be built which can then be sub-classed as required to tweak or change behavior.
As you can see there’s just one method – format() – that returns an InputStream containing the formatted output. This allows the resulting formatted content to be pretty easily stored to a physical file, or to be piped around and read from in memory. Java’s streams (and all their variants) are fairly rich and well understood at this point so this seemed a pretty low-level and flexible result to get out of the formatter.
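As a sketch of that contract – the real interface surely differs in naming and in the log file type it accepts – a single format() method returning an InputStream might look like:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of the one-method formatter contract.
interface PCoIPLogFormatter<T> {
    InputStream format(T logFile);
}

// A trivial implementation: render to XML-ish text and hand back a stream
// that can be written to disk or consumed in memory.
class XmlFormatterSketch implements PCoIPLogFormatter<String> {
    @Override
    public InputStream format(String logFile) {
        String xml = "<pcoip-log>" + logFile + "</pcoip-log>";
        return new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8));
    }
}
```

Because the result is just a stream, the caller decides whether it becomes a physical file, a buffer in memory, or the body of a network response.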
For the sake of the current work, I’ve created an XML formatter to allow the existing visualization chain to be used in the viewer. But I can easily imagine an Excel-based formatter to generate a physical file, or a JSON formatter to allow the parser/viewer to move to a cloud-based application and send data down to a browser or mobile device. All of these should be fairly easy to implement.
Gluing it all together – The Factory
Well, saying “all” might be a bit of a lie, but I wanted an easy way to make sure you always get the “right” parser, despite there only being one that’s of any use right now. So I decided to build a ParserFactory to do that.
Basically, the Factory scans the classpath looking for PCoIPLogParser implementations and registers them with itself to build a catalog of all available parsers. Within the app, when we need to parse a file, we call the cleverly named getParser(logFile) method. This method examines the raw log file, extracts the “version” of the file based upon certain elements, and then queries every registered parser for its supported log file version. When it finds a match, it returns that parser. Currently there’s not much thought given to conflict resolution (e.g., what if more than one parser can handle a given version of file?). But my thought on that is: within any given project, you probably wouldn’t need more than one way to “parse” a file – you might want more than one way to format it, but really you only care about getting it into the PCoIPLogFile object and working with it from there. So while more than one parser capable of dealing with a View 5.2 log file might exist, it isn’t likely that you’d see them together in the same project where they might cause collisions.
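A stripped-down sketch of that version-matching lookup – using explicit registration instead of classpath scanning, which needs more machinery than fits here, and with every name invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the factory lookup: parsers advertise the log
// file version they support, and the factory matches on it.
class ParserFactorySketch {
    interface VersionedParser {
        String supportedVersion();
    }

    private final Map<String, VersionedParser> registry = new HashMap<>();

    void register(VersionedParser p) {
        registry.put(p.supportedVersion(), p); // last one registered wins
    }

    VersionedParser getParser(String rawLogVersion) {
        VersionedParser p = registry.get(rawLogVersion);
        if (p == null)
            throw new IllegalArgumentException(
                    "no valid parser for version " + rawLogVersion);
        return p;
    }
}
```

The “last one registered wins” behavior in the map is the simplest possible answer to the conflict-resolution question above.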
The horrible diagram above shows (poorly) the basic flow of parsing a file:
- Get the file
- Ask the ParserFactory for a parser, passing it that file
- The factory determines the version of the file
- The factory queries all the registered parsers for their ability to parse it
- A valid parser capable of parsing the file is returned from the factory
- The parse() method is invoked on the file and the PCoIPLogFile object is returned
- The PCoIPLogFile is validated (this is where you can bail out before trying to format)
- If the PCoIPLogFile object is valid, it is formatted
- The returned InputStream is used as required – written to disk or passed to another method/process for display or further manipulation.
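Putting the steps above together in one toy example – every type here is invented for illustration, standing in for the factory, parser, PCoIPLogFile, and formatter:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// End-to-end sketch of the workflow above:
// parse -> intermediary object -> validate -> format -> InputStream.
public class ParseFlowSketch {
    static class LogModel {                          // stand-in for PCoIPLogFile
        String buildId;
        void validate() {
            if (buildId == null)
                throw new IllegalStateException("invalid log file");
        }
    }

    static LogModel parse(String raw) {              // stand-in parser
        LogModel m = new LogModel();
        if (raw.contains("Software Build ID:"))
            m.buildId = raw.substring(raw.indexOf(':') + 1).trim();
        return m;
    }

    static InputStream format(LogModel m) {          // stand-in formatter
        return new ByteArrayInputStream(
                ("<log build=\"" + m.buildId + "\"/>").getBytes(StandardCharsets.UTF_8));
    }
}
```

The validate() call is the bail-out point: a bad file throws before any formatting work is done, and only a valid model ever reaches format().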
I might also consider a factory for the formatter classes as well, something where you can pass in a desired format and get the right formatter back – or be informed no such formatter exists – and go from there. But it seems rare outside of a web context (RESTful service) that any individual usage of this would require such flexibility so it wasn’t high on my list. For now, you know what format you need and can just go with that. Anyway, I digress…
Wrapping it up
With the completion of this new parsing framework, parsing PCoIP log files going forward should now be easier to deal with. Still frustrating, but easier. At the very least, there is now a level of encapsulation and separation of duties that was missing from the organically “grown” monolithic parser that started it all. The real test will be when the log viewer breaks again, as it inevitably will, and I am required to make the first updates. That’s when I’ll know if this was worth doing. But in my view, it’s fun to make updates like this. Is my design perfect? Hardly. Are there probably a dozen libraries/frameworks I could have tapped to build this in? Almost certainly. But they often add a lot of baggage and “weight” and if you didn’t build your entire project against those frameworks it often means the benefits are limited or that you end up starting over. Building this from scratch was satisfying on multiple levels. It’s something I had wanted to do for a while, so I have the satisfaction of completing it. It’s only as large and complex as it needs to be – which isn’t very much of either. And lastly, it should allow me to support the users of this app better than I have been able to do in the past.
In the end, that’s all I really wanted.
At VMworld US 2012 in the EUC2620 session, I spoke about both the PCoIP Log Viewer, and PCoIP Configuration Utility. Unfortunately, the “cool” stuff in the PCoIP Configuration Utility that I showed in that session wasn’t publicly available. Those features were/are being utilized inside a few select VMware customer environments to gather some feedback but I had not posted the code here at MindFlux – until now.
So, here it is! The new and improved PCoIP Configuration Utility.
I have decided to remove the beta moniker and call this code drop Version 1.0. I consider this a decently “feature complete” drop of the code, but as with all software, it’s never done, and bugs almost certainly remain. Unlike Google though, I feel compelled to take things out of beta after a reasonable amount of time :)
- PCoIP Connection Health Monitoring (See screenshots)
- Monitors various PCoIP connection stats and determines an overall health score
- Health status can be easily determined via the systray icon with states ranging from “green” to “red”
- A new health status popout that shows the metrics being tracked and their current health “scores”
- Profile Tooltips
- Hovering over a profile within the “Apply Profile” menu will now show a tooltip with the detailed settings for that profile
- PCoIP Server CPU Utilization
- This should now be accurate. Previously, as load increased, this value would scale out of proportion to the actual load, leading to inflated CPU utilization values
- Session disconnect awareness
- The utility now detects when there is no active PCoIP session and, when there isn’t, enters a “sleep” mode that consumes less CPU than the active polling mode. Why burn CPU if there’s no user connected, right?
- Upon re-connection the utility will cleanly reset and resume stat collection in active polling mode. Previously, stats were not properly reset after a disconnection event, causing some ugly numbers in the stats popout window.
Requirements, Issues, Implementation details, etc.
- .NET 3.5 is required
- VMware View 5.0 or 5.1
- Tuning profiles and last used profile are stored in HKCU and in theory should be able to roam with a user to whatever desktop they happen to land on – if you have something setup to allow the profile to roam, that is.
- The utility will request admin access – it needs this to be able to write to HKLM which is where the PCoIP settings are read from. You may need to get clever while launching it if you want to avoid UAC nagging you about it.
- This seems to come up often in the comments and in other feedback, right now there’s not an easy fix for this. I have reached out to Teradici to get these settings moved under HKCU – we’ll have to see where that leads. This CAN be fixed but most options are kind of a “cure worse than the disease” solution that I’m not ready to move towards just yet.
- GPO Conflicts
- If you plan to use this tool to manage the tuning settings of a desktop you should avoid also setting tuning parameters via GPO on that desktop. GPO can still be used for things like clipboard sharing policy, or USB device policy, just avoid any of the settings controlled via a profile.
You can grab a copy of the utility here: PCoIPConfig.exe
There are a lot of enhancements still left to make to the current tool. But the biggest issue remains the required disconnect/reconnect. There are some potentially promising developments along that path though, so stay tuned. Please share your thoughts on enhancements and/or bugfixes in the comments.
I have updated both the parser and the Log Viewer so that they should work with VMware View 5.1 GA log files.
To update the viewer simply launch it while connected to the internet and it should auto-update.
The updated parser can be found here. It should report version 1.0.0134 when passed ‘-h’.
If you need to freshly install the viewer you can find it here.
One item of note – it seems that delta bits and delta build bits have been removed in this release so there will be no data reported for them in the graphs.
As always please report issues in the comments and PLEASE provide some feedback on my previous post: A note on View 5.1 Support for the PCoIP Log Viewer
Some of you may know by now that the PCoIP log files are the bane of my existence. Since I began building the log viewer almost 2 years ago now, I have been in a constant battle to build a parser that can handle these ever-changing files. So far I’ve managed to make it happen with each release, though not without much fighting, clawing, and gnashing of teeth.
Once again, with View 5.1 I find myself in a position of chasing a new file format that clearly was never intended to be machine parsed. Data is randomly placed, and formats and positioning data are altered from previous releases – seemingly for no good reason. New data is mingled with old and some data has simply vanished entirely.
With the 4.x series of releases the changes were there but relatively minor from release to release. With the move to 5.x and the introduction of the APEX2800 card, the changes are much more significant. I assume this is because someone is making the assumption that if people want to monitor PCoIP they’ll use the WMI counters to do so and that the logs are irrelevant. However, it’s not always feasible to get direct access to be able to monitor via WMI and in some areas the logs do contain more information than the WMI counters have available. So it’s been a worthwhile struggle to keep updating the parser to get meaningful data out of the logs.
At this point though, the changes for both View 5.0 and 5.1 look to be quite significant. And updating the existing parser for these formats while retaining the old formats is becoming a significant challenge. I am thinking it might be time to reconsider the entire parser framework to make it more easily expandable and maintainable for future releases. This means, without question, a longer downtime before View 5.1 support will appear. It also quite possibly means the end of a standalone parser as my direction would be a cloud-based design and/or integration of the parser into the viewer directly.
I’d like to try and solicit some feedback on these new directions, so here’s a few questions, please respond in the comments:
- Do you currently use, and see value in, the continued support of a standalone parser executable?
- Do you see value in supporting log file parsing for View releases prior to 5.0 in a new parsing framework? (the standalone parser as-is will still be available to parse older files)
- If you had to rank in order of preference a cloud-based parser vs. an integrated parser in the log viewer which would you prefer to see first?
- Do you see the need for continued log file parsing support or is WMI support alone enough going forward?
I will determine the future course of Log Viewer development based upon your answers (or lack thereof).