Monday, August 9, 2010

Scripting a 'Suspend Alarms' Function

In Kaseya we were used to a very handy feature in the monitoring system called Suspend Alarms.  We used this extensively during maintenance periods and one-off server reboots.  As you can guess, this would suspend the generation of any alarms during our maintenance.

I found that LabTech didn't have a similar function. In LabTech you open the machine and check the 'Disable Alerting' checkbox under the Info tab. This certainly disables alerting, but our concern was that it could accidentally remain checked if the tech forgot to uncheck it. It also wasn't a solution to our maintenance window issue, where we need to suspend alarms on large numbers of servers at one time.

The solution was to write a script to set and clear the Disable Alerting flag. Since there isn't a specific script function to set this field, it requires a Run SQL script command:

update agentcomputerdata set NoAlerts = 1 where ComputerID='@computerid@'

Pretty simple, but we also wanted to ensure that it was set back to 0 some time later, so my solution was to add two more steps to the script: Step 2 puts the script to sleep for a while, say 30 minutes, and Step 3 runs another SQL command to set NoAlerts back to zero (the same statement with a 0).
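As a rough illustration of the three steps, here is a sketch in Python. To be clear, in LabTech these are "Run SQL" and sleep script steps, not Python; only the table, column, and @computerid@ variable come from the actual script, and the run_sql/sleep hooks are hypothetical stand-ins.

```python
import time

# SQL statements from the LabTech script; @computerid@ is expanded by LabTech.
SUSPEND_SQL = "update agentcomputerdata set NoAlerts = 1 where ComputerID='@computerid@'"
RESUME_SQL = "update agentcomputerdata set NoAlerts = 0 where ComputerID='@computerid@'"

def suspend_alarms_script(run_sql, minutes=30, sleep=time.sleep):
    run_sql(SUSPEND_SQL)   # step 1: suspend alarms
    sleep(minutes * 60)    # step 2: wait out the maintenance window
    run_sql(RESUME_SQL)    # step 3: re-enable alarms

# Record the statements instead of touching a real database.
executed = []
suspend_alarms_script(executed.append, minutes=30, sleep=lambda s: None)
```

The point of the structure is simply that the re-enable step is baked into the same script, so alerting can never be left off by accident.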

I was pretty happy with that, but then Bill Morgan from LabTech suggested that we just add an enable and disable script to our maintenance script groups and then we wouldn't need to treat it as a separate item.  I think we might have come to that conclusion on our own as we decided *where* we would schedule this script, but it was another one of those ah-ha moments.

The best part of this is that it will cure one of our most common mistakes: rebooting a server without first suspending alarms. Yes, about 50% of the time when a tech schedules a reboot (outside of our normal maintenance window), they forget to suspend the alarms in Kaseya. Because rebooting is a script AND disabling alarms is a script, why not just combine the steps? By combining these two scripts we will no longer even need to think about suspending alarms.

DW

Wednesday, July 21, 2010

SNMP Continued

So the SNMP stuff was cruising along until we hit another little snag. The system where we had the agent/probe installed had a conflict on port 69 (TFTP). The LabTech agent's internal TFTP server would start before the system's TFTP server, effectively preventing the system TFTP server from starting. This was only an issue when the endpoints for the system (it happens to be a VoIP system) needed to refresh their configurations. Since they used TFTP to get the updates, the refresh would fail because the LabTech TFTP server had nothing for them.

The bad news *was* that we could not change or disable that in the LT agent. I spoke to Kevin at LT about the issue a few weeks ago, and yesterday he and a couple other guys informed me that the next maintenance release will include the ability to disable these services via a registry setting! Since we don't need those particular services for anything we are doing on the box, it will be a perfect solution for us. So now we are back in business (or at least we will be when the update is released, which should be very soon).

DW

Wednesday, June 9, 2010

SNMP Update

I haven't posted anything about this topic for a while. The good news is that receiving traps seems to be working very well. The bad news is that the main reason we needed to do this threw us a little curve. It turns out that the device we are receiving traps from sends a lot of "informational" traps (software version number, etc.) every 30 seconds. So we end up with hundreds of traps received, but it is just a bunch of noise.

We are working with the vendor to get the proper OIDs of the traps so that we can use the SNMP trap filtering to accept only what we want. Once we get that we will be in good shape.
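The filtering idea is basically an accept list. Here is a minimal sketch; the OIDs are made up, since we are still waiting on the real ones from the vendor.

```python
# Hypothetical accept-list of trap OIDs (placeholders, not real vendor OIDs).
ACCEPT_OIDS = {"1.3.6.1.4.1.9999.1.2", "1.3.6.1.4.1.9999.1.3"}

def wanted(trap_oid):
    # accept only traps whose OID is on the list; drop the noise
    return trap_oid in ACCEPT_OIDS

received = ["1.3.6.1.4.1.9999.1.1",  # informational noise (version info)
            "1.3.6.1.4.1.9999.1.2"]  # a trap we actually care about
kept = [oid for oid in received if wanted(oid)]
```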

One thing that would be incredibly useful in the SNMP trap mechanism in LT would be the ability to send the name of the trap definition along with the alert.  Each entry you make in the SNMP trap table has a name/description field.  Unfortunately, this information is not passed with the other alert information.  I think that Kevin submitted it as an enhancement request.

It is my understanding that LT has some significant SNMP enhancements on the roadmap.  In my opinion, the current SNMP stuff is pretty good and at least better than what I was able to do with K.

DW

First 200 Kaseya agents removed

So we finally started removing Kaseya agents. Yesterday we removed approximately 200 agents across a number of clients. We are moving slowly to ensure that we haven't missed anything, but I really feel that our workstation monitoring/alerting is now equal to or better than what we had.

The only issues so far have been primarily with getting Mac agents installed, but luckily we don't have a lot of them and we weren't doing much with the Kaseya agent on the Macs anyway. LabTech is helping us with this.

The other issue is that we get some false positive Master Computer Offline alerts.  I need to run this by LabTech support, but it doesn't seem to be causing us any issues.

Our biggest issue right now is that we have clients in two systems, which will cause the techs a little grief. The techs have been using LT for the majority of their work for the last few weeks, so it isn't too big of a deal.

DW

Wednesday, May 26, 2010

SNMP traps

We have been through a few RC builds of 2010 and we seem to be getting closer to getting the SNMP traps working.  The last build fixed some things, but didn't fully resolve the issues.  I worked with Kevin Davis on the SNMP for a while and he sent logs to development.  It sounds like they identified and corrected another issue so we will have another crack at it soon!

DW

Connectwise ticket integration

Last week I worked on getting the CW ticketing integration finished. I had already set up most of what was needed when I set up the ConnectWise plug-in to do the client imports, but there were still a few items to finish up.

My first couple tests were not successful, so I submitted a ticket explaining how I tested and that no ticket had been created in CW. Support asked for some additional information, said that the configuration looked fine, and wanted to schedule a time to get on our system to investigate. The next day I tried to import another client and couldn't locate them in the import list. In the past when that happened, it was caused by the customer type not matching our plug-in filter setting, and that was the case with this one. Seeing that made me think my ticketing issue could be related, as my test was on an alert for our test company, not an actual client. I tried creating a ticket from a 'real' alert and the ticket was created in CW instantly!

So, my takeaway is that the customer type filter setting in the plug-in is for more than just importing. It seems to control the scope of what the plug-in will see, which now makes perfect sense.
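My mental model of the scoping behavior, as a sketch (this is my reading of what happened, not LabTech's actual implementation, and the filter value and company records are hypothetical):

```python
# Hypothetical customer type filter as configured in the plug-in.
PLUGIN_TYPE_FILTER = "Managed"

def visible_to_plugin(companies):
    """Companies outside the filtered type are invisible to the plug-in,
    both for importing AND for ticket creation."""
    return [c for c in companies if c["type"] == PLUGIN_TYPE_FILTER]

companies = [{"name": "Test Co", "type": "Internal"},   # our test company
             {"name": "Real Client", "type": "Managed"}]
scoped = visible_to_plugin(companies)
```

That would explain why the test company produced no ticket while a 'real' client worked instantly.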

DW

Thursday, May 20, 2010

Scripts and permissions

During a quick training session with a couple techs I found out that they could not execute/schedule scripts.  I checked permissions on the users, groups, clients, etc, but couldn't find anything that would allow them to execute a script.  I sent it to LT support and they requested a screenshot of the user permissions. 

About an hour later I got a call from Kevin to review this. He found the issue in the permissions for the individual scripts. The one place I didn't look was *in* the script. Each script has edit and execute permissions, and since we had made some significant changes/additions to the security group structure, the groups were not in the scripts.

Now I just need to either use one of the security roles that already existed (and add it to the techs) or add the tech security roles to the scripts.  I haven't decided which way to go yet.

So remember that the security for each script is *in* the script.  The ability to execute/schedule scripts (at all) is in the user/group security.

DW

Wednesday, May 19, 2010

Almost ready to start pulling Kaseya agents

We have a few things left before we can start pulling Kaseya agents (from workstations), but I think we are close.  Some of the items are:

1. Finish Connectwise ticket integration
2. Tweak a few monitors & scheduled scripts
3. Additional tech training
4. Get SNMP traps working.
5. Get a document to our clients describing the new way of submitting a ticket through the mgmt icon in the tray.

Hopefully we will be able to start the Kaseya removal by Friday afternoon.

DW

Tuesday, May 18, 2010

SNMP Continued

I worked with LT on the SNMP issue for about an hour, but was not able to get it resolved.  At this point I am not sure if it is our testing methodology or LT.  LT has been making changes to the SNMP Trap monitoring, so this could just be some side-effects of being on a release candidate.

For anyone that is interested, there is a good tool out there for sending SNMP traps.  It is called TrapGen and is a free download from NCT.
http://www.ncomtech.com/trapgen.html

To send a trap to a trap receiver, you simply use the command TRAPGEN -D xxx.xxx.xxx.xxx.

It will send a static SNMP trap to the device and is very useful for testing.  You can override the static trap info by using additional command line options.

I also did some packet captures to identify the SNMP traffic to ensure that it was being formatted correctly. 
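Since SNMP traps travel as plain UDP datagrams (UDP/162 by default), the delivery path can be sanity-checked the same way TrapGen does it. Here is a rough stand-in sketch: it loops a fake payload over localhost on an ephemeral port (binding 162 needs admin rights, and this is not a real SNMP PDU, just a demonstration of UDP delivery).

```python
import socket

# Receiver: bind an ephemeral localhost port (a real trap receiver uses 162).
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
recv.settimeout(2)
host, port = recv.getsockname()

# Sender: TrapGen would send a real BER-encoded SNMP trap PDU here.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"fake-trap", (host, port))

payload, _ = recv.recvfrom(1024)
send.close()
recv.close()
```

If the datagram arrives here but the probe still shows nothing, the problem is on the receiver side rather than the network.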

DW

Thursday, May 13, 2010

SNMP Traps - Stuck

I thought I was home free on the SNMP traps, but it turns out that I might have been a little too optimistic. The initial success of getting alerts when an SNMP trap was sent turned out to be less than I had hoped. I put it aside for a few days and revisited it last night. What I now realize is that all the alert tells me is that the probe received a trap, and nothing else except a number that appears to be a sequential counter of some sort (serialized??).

To make matters worse, I was getting a pile of these alerts via email, so I decided to remove the SNMP setting for now, but I am still getting alert emails.

I will grant the fact that I am on a release candidate of LT2010, so the possibility of a bug is certainly a reality.  I am waiting to talk to LT support about it some more.

DW

Major problem - quick fix

Today we instructed the techs to use LabTech for the portion of the client base that we have dual installed (LT/K). The first issue we ran into was that remote control wasn't working at many locations. We figured out pretty quickly that we had not configured the clients' routers to allow TCP/70 out (the default redirector port for LT). At first we thought we would need to touch all of the routers that were tightly locked down (most of our clients are configured this way), but then we realized that we already had all of those routers configured to allow TCP/5721 out because that is what Kaseya needed.

I changed the configuration in LabTech to do the redirection connections on 5721, changed the router in the data center and BAM, all the remote control sessions would now work.  Zero time spent changing routers :-)

So, for anyone making the switch from K to LT, you might want to consider this if you have many clients that have locked down routers.  It saved us a couple days of time.

DW

Friday, May 7, 2010

SNMP Traps - I should have RTFW

So I set up a time to work with Kevin from LabTech on the SNMP trap thing. Given my struggles with it in the past, I chose to do nothing with it until I spoke to Kevin. I got on the phone with Kevin and he walked me through adding an entry under the SNMP traps section of a machine configured as a probe. For the test, we created an entry that would accept ANY OID from ANY host matching ANYTHING. Kevin explained that a current "limitation" is that the alert is limited to an email to the alert contact for that location.

Then we were done with the setup.

I was a little shocked that it was that simple. I really should have RTFW on SNMP traps. The LabTech SNMP traps wiki page is a single page, and there is a good reason why it is that short.

I let Kevin off the phone and did some testing and it worked! 

DW

Tuesday, May 4, 2010

SNMP Traps

Now that we have the big rocks out of the way (at least we think we do), we need to concentrate on some of the finer points. One thing we always struggled with in Kaseya was effective SNMP monitoring and alerting. In all fairness I think it could have worked better, but it was a real pain and I always seemed to get inconsistent results. Of course, Kaseya support was completely unable to assist me on this stuff.

I have already done some basics with Labtech and the SNMP functionality, but the one piece that I am anxious to get going is the ability of a Labtech endpoint to act as an SNMP trap receiver.  We have a special application that requires us to be able to receive traps from an SNMP enabled device as opposed to querying OID values and checking them.  This is purely catching the traps as they are sent and processing them.

During my initial discussions with LabTech we covered this with Gregg Lalle and Bill Morgan. At first they said, "oh, sure, no problem monitoring SNMP." I then clarified that I meant traps, and Bill said he had to check on that. By the end of the conversation Bill had checked with someone and told me that it absolutely could be done. Nice!

Now I just have to figure it out.  I sent a request to Kevin for help on this, he is probably ready to kill me :-)

DW

Goal for the week

Our goal is to have at least 1000 of our managed workstations moved by this Friday.  Joe and Jason have their work cut out for them!

Group issue is resolved!

Kevin from LabTech emailed me yesterday morning that he had verified a workable solution for our on-boarding and do-not-touch groups. We set up a one-hour meeting at noon to review it with him. The way we are accomplishing the on-boarding is to create a custom field at the client level called 'on-boarding'. When this box is checked, all machines under that client are added to the group 'on-boarding'. The on-boarding group is a master group, which removes machines from all other groups (except other master groups) and prevents them from being added to any others. We simply ensure that there are no active scripts/hotfixes/alarms etc. in the on-boarding group and everything is good.

The do-not-touch case works pretty much the same way: we have a group of the same name and a custom field (at the machine level) to do the same thing.

One thing this requires is that you do not make any other master groups, as a machine *can* be a member of more than one master group.

Two things I really like about this solution are that it did not require us to modify every search in the system to *exclude* the on-boarding or DNT machines, and that it is just a single checkbox (so very little manual interaction).
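For anyone trying to picture the master-group behavior, here is a sketch of my reading of the semantics (an illustration of the idea, not LabTech's actual implementation; the group names are ours):

```python
# Our two master groups; membership in either one overrides everything else.
MASTER_GROUPS = {"on-boarding", "do-not-touch"}

def effective_groups(memberships):
    """If a machine is in any master group, all non-master groups are dropped
    (and the machine is blocked from joining new ones)."""
    masters = memberships & MASTER_GROUPS
    return masters if masters else memberships

# A machine flagged for on-boarding loses its monitoring/patching groups.
groups = effective_groups({"on-boarding", "patching", "av-monitoring"})
```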

We feel that the solution is going to work perfectly for us.

Thanks to Kevin from Labtech!

Friday, April 30, 2010

Still trying to figure out groups and searches

One of our current hold-ups is the problem we have with keeping certain machines *out* of groups. There are two issues. First, we have a client on-boarding process where we install agents, collect data and analyze the reports to look for issues (i.e., a server that has never been patched). Once we identify and resolve issues, we turn on the monitoring/alerting/patching/scripting. We feel this is a prudent measure, as we certainly wouldn't want a bunch of stuff to blow up on the day we install our agents.

With Kaseya, it was a pretty simple process.  We always created our agents with a *do nothing* template that simply put the machine into the proper client group and audited the machine for system and patch information.  Once we had that we could determine if we needed to take any special remediation measures before we turned on the full management.

With Labtech, we have not found a way to get the machines into the system without having them automatically pulled into all of the groups that provide the monitoring, patching and scripting.  I contacted Kevin at Labtech and he is working on it.

The other issue will be solved by whatever the solution to the first problem is.  That issue is that at a few client sites there are machines that we deem as *do not touch* machines.  These would be machines like laboratory equipment machines, manufacturing automation machines, etc.  Essentially we monitor these machines and do remote remediation, but do not patch/script/reboot them as usual.  We need a way to flag these machines and keep them in special groups.


At this point I am not sure if it is the design of the Labtech groups/searches that is making this difficult, or just the fact that we are not yet aware of how to make it work.

DW

Wednesday, April 28, 2010

Post update non-issue

After we updated to 2010 we noticed that 9 of the 100 test servers were not showing online (although they were checking in and accessible). We determined that the agent had not auto-updated from 0.835 to 30.917 on those machines. I contacted support and they had me try a couple things, then asked me to submit the lterrors.txt file from one of the servers. A few hours later Matt from support asked me to verify that, at minimum, .NET 2.0 was installed on one of the affected servers. I checked and found that it was NOT installed. I installed .NET 2.0 and forced the agent update. Guess what, it updated!

So the lesson here is that when they say that .NET 2.0 is required, they mean that .NET 2.0 is required.

I am not sure, but I guess that the agent installer does not warn you, or require you to have .NET installed first.  That would be a nice dummy check for them to add to the installer.

DW

Tuesday, April 27, 2010

3rd training session

We have been fairly idle on our migration since our last training session; until we had more information on how the groups worked, we were not ready to continue. Now that we have had training on the groups and templates, we have most of the missing pieces.

Once again Robert was a fantastic trainer and we had a great Q&A as the last part of the training.  We hit him up with some pretty tough questions and he was able to get us great answers on everything.

We are continuing to work on how we will pull this all together over the next week, and we hope to be ready to bring on 1,000 or so workstations as a first wave.

A couple things we are still trying to figure out are:

1. Auto-join for groups. It seems simple enough and works well for the initial population, but if a machine falls out of the scope of the search it is not removed from the group. For example, if you had a search that populated a group based on the OS being Windows XP and later upgraded the OS to Windows 7, the machine would remain in the Windows XP group AND be added to the Windows 7 group.

This has to be something that we are missing as I couldn't imagine this not working the way we think it should.
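The refresh behavior we expected from auto-join can be sketched like this (a hypothetical illustration of what we think *should* happen, not what LabTech currently does):

```python
def refresh_group(current_members, search_results):
    """Rebuild group membership from the search each pass, so machines that
    fall out of scope are removed as well as added."""
    added = search_results - current_members
    removed = current_members - search_results
    return (current_members | added) - removed

xp_group = {"pc1", "pc2"}   # pc2 has since been upgraded to Windows 7
xp_search = {"pc1"}         # so the Windows XP search no longer matches it
xp_group = refresh_group(xp_group, xp_search)
```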

2. How to 'stage' new machines. Currently, when we on-board a new client we install agents on all workstations and servers with certain templates applied that do nothing but collect system information and patch data. We then run reports on this collected data, discuss it with the technical team and the customer to wrap up any details, and then apply our monitor/alert/patch templates. We don't want to just pull the trigger on a network that probably hasn't had proper management since it was set up. Nobody wants to blindly release 130 patches on an un-patched server.

We thought we had figured out how we could bring machines into LabTech and have nothing done to them until we moved them into the proper group, but then realized that the searches which do all the work really didn't care if the machines were in a particular group or not.  The searches are going to populate the groups as that is exactly what they are designed for.  So we need to step back and rethink that strategy.

I am sure that we will have some other items.

DW

Upgrade to Labtech 3.0 / 2010

As I mentioned in an earlier post, we were not able to get the security roles defined exactly as I wanted.  Kevin from Labtech confirmed with development on Friday that the 3.0 release (now called 2010) would in fact have the ability to create the security classes as I needed.  We planned the upgrade for noon today.  At noon Kevin called and we got the process under way.  Within about 20 minutes the upgrade was complete and we started working on the construction of the security classes.  In another 30 minutes he had created and tested the 4 security classes that I needed.

I was impressed by the upgrade process. It was about as simple as any upgrade I have ever seen.

DW

Friday, April 23, 2010

2nd training session

Yesterday we had our second training session with Robert from LabTech. Unlike the first session, where we had pretty much already done all of the things he showed us, this session was all new territory. We covered monitoring and scripting. When I say "covered", think of a brain surgeon showing you how to do brain surgery. These subjects are both very deep because they are both very powerful and flexible.

We are still planning how we are going to structure the groups, monitor sets and scripts.  We have learned a lot in the years that we have been using Kaseya.  We are going to take that experience and apply it to this new tool and I feel that we are going to get incredible results.

So I hope that my fellow Labtech admins have a great Friday.  I am going fishing today!

DW

Wednesday, April 21, 2010

Support

I submitted a couple tickets this afternoon regarding my permissions issues. Within about an hour both tickets had been responded to. The first response I wasn't crazy about. It confirmed that you have to give super-admin rights for a user to see the searches. I don't think I understand exactly how those searches work, or whether it is even necessary for anyone to see them on a daily basis.

The other response was a little oversimplified, telling me to set permissions on the groups, etc., etc. LOL! I knew that... Tell me what permissions to set.

A short time later I got a call from a support person named Kevin. Very nice guy. We worked on the issues for about 15 minutes, and then he said that he needed to reproduce them in his lab setup.

I feel confident that we'll get it resolved. Our next training session is tomorrow, with another on Monday. That will probably go a long way toward getting us up to speed.

DW

Permissions revisited

I have spent the better part of the day trying to get the permissions tweaked so that everyone has what they need.  I basically need 3-4 types of users.

1. Service coordinator - Simple access to see if machines are on-line. No interactive use.

2. Technician - general ability to work on machines through the machine interface or remote control.  They should also have the ability to view monitors.

3. LabTech "basic" admin? - Ability to create/edit monitor sets, groups, alerts, etc. (no LT user administration). Admin permissions to *almost* all groups (restricting access to internal company PCs and servers).

4. Super-admin - As the name implies, all access.

1, 2 and 4 seem to be OK. I cannot seem to get #3 accomplished. It appears that I need to give super-admin access to my RMM manager team to get some of this done.

I was told by support that you need super-admin rights to view the searches. I must not understand the design intent of the searches if you need to be a super-admin to view them. My assumption now is that the searches are solely for populating groups, which would make sense.

I am still waiting on a response to find out how to get #2 & #3 to see the monitors.

DW

Tuesday, April 20, 2010

Day 12 - 100 servers on line

It has been 12 days since I ran setup on our production LT server. In that time my "Kaseya" guys have made a lot of progress deploying agents on clients' servers. Our plan is still to get one server at every client site on LT with the network probe running and collecting data. When we are finished with that and have our templates and monitors ready, the remaining servers and all the workstations should be a fairly easy deployment.

The progress that Joe and Jason have made is particularly impressive due to the fact that they have been doing these deployments in between their normal workload.  Only today did we finally put in a ticket for this so it was officially on the schedule.

Knocking on wood, we have not had a single agent installation failure or other issue so far.

Monday, April 19, 2010

Voice modem fun

Now that the server was moved to physical hardware I was able to work on the voice modem setup.  I ran the Voice Modem setup from the server and clicked through the informational screens.  When I ran the test, I entered a phone number to dial, clicked ok and then.... Nothing.

I tried a few things, re-read the wiki and searched the forums.  I submitted a ticket and got a response today that said "This is a known issue that has been reported to development and should be fixed in the next release".

Then I got another email stating that the ticket was being closed because my issue was resolved.

So I replied to the ticket asking if the "next" version meant v3.0 (the next major release) and also asked for clarification as to whether the bug was with the testing of the voice modem or with its operation entirely.

I hope that they just mean the testing part *or* they are talking about a dot release which might be out sooner.

First training session

Today was the first of our training sessions.  This session covered server installation and setup, some basic navigation stuff, groups, security and agent deployment options.  Based on the work we have already done we could have skipped this session, but it was good to see that what we had already done was validated.

Our trainer Robert was excellent.  (I give him an A) He was obviously very organized and prepared, he didn't miss a beat.  He crammed a lot of content into 2 hours.  I am sure that if we hadn't done all the work that we had, we would have been dazed by the time it was over.  One thing that he did was to make us the presenter of the GoToMeeting and have us record the session.  What a great idea!  It took the pressure off of worrying about taking detailed notes.

Based on what we saw, we feel our current deployment process is valid so we will continue as we were.

We are looking forward to our next session where we will be working with monitors.

Friday, April 16, 2010

K2 test server goes bye-bye

So after a discussion with a peer yesterday about their continued K2 issues, it struck me that we still had a K2 test server spinning away in the lab. This morning I told the techs that they can safely re-purpose that server for other lab purposes.

Backup/Secondary addresses for agent check in

One thing we always made sure to do with Kaseya was to set the agent check-in addresses. The first address was the hostname of our Kaseya server and the second was the IP of the Kaseya server. The second address enabled the agent machine to check in if DNS was not available. This helped us on more than one occasion, but it had an odd side effect: if the Kaseya agent was checking in on the secondary address you could *not* remote control the PC. The temporary solution was to change the check-in control to make the IP address the primary, and then remember to switch it back.

One of my checklist items was to determine how to do this in LabTech. Once again the wiki is my friend: see the LabTech wiki page for the secondary agent address.

Setting it up appears to be quite simple. Just specify the addresses separated with a pipe symbol "|". I laughed when I read the line that said "labtech is limited to 10 backup addresses". Really, only 10? Has anyone ever needed 10 backup addresses?
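The format is easy to picture. Here is a sketch of parsing such a pipe-separated list; the hostname and IP are placeholders, and the 10-backup cap is the limit quoted from the wiki.

```python
MAX_BACKUPS = 10  # per the wiki: "labtech is limited to 10 backup addresses"

def parse_checkin_addresses(setting):
    """Split a pipe-separated check-in list into primary + backup addresses."""
    addresses = [a.strip() for a in setting.split("|") if a.strip()]
    primary, backups = addresses[0], addresses[1:]
    if len(backups) > MAX_BACKUPS:
        raise ValueError("limited to 10 backup addresses")
    return primary, backups

# Hostname first, IP fallback second, just like we did in Kaseya.
primary, backups = parse_checkin_addresses("lt.example.com|203.0.113.10")
```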

4 days later..

The week has gone by pretty quickly.  We have been steadily plugging away with our deployment and it is going well. 

One thing we did this week was to move LabTech from Hyper-V to a physical server. The move had nothing to do with performance; it was simply to allow us to *easily* connect a voice modem to the server. Connecting a serial modem to a Hyper-V host and then having a guest OS access that modem seemed to be more work than it was worth. I did make a feeble attempt using an open source COM port emulation program, but it didn't look like a supportable method. I can see the support call now...

ME: "uh, our voice alerts are not working.  We are getting this odd error"
SUPPORT: "Hmm, sounds like a bad COM port.  Can you try moving it to another COM port?"
ME: "Sure, let me get back to you when I move our server to physical hardware"

So we'll just avoid that possibility...

With guidance from one of our crack backup team guys (Phil) I was able to move the server from Hyper-V to a Dell PE1950 in a couple hours. Actually, the "guidance" was in the form of harassment about the fact that I shouldn't be doing the move and that a member of the backup team should be (some days I feel like I am in a union shop). That ended with me losing a $5.00 bet to him over how a Broadcom driver would install.

Now that we moved that to physical hardware I will get around to making the voice modem work.

Joe and Jason have been plugging away and now have about 50 servers deployed. The network probes are returning plenty of information and it looks like we will be able to easily deploy endpoints when we are ready.

We have our first training session on Monday.

Monday, April 12, 2010

Deployment to clients

Now that we could import clients from ConnectWise, set up proper permissions and deploy tools, we decided that we were far enough along to do a few deployments to a small group of clients. We picked three clients and went through our basic process of importing, initial agent installation, setting the server as master and network probe, etc. It took us about 45 minutes to slowly walk through this on 3 clients (we were documenting as we went). Everything went very smoothly, and within a few minutes one of the locations started to return network probe results. I checked a couple hours later and it appears that all of the network probe information has been collected.

Our plan was to deploy more tomorrow if this was a success.  I guess we will be deploying...

Bill Morgan rocks!

I asked our rep (Gregg) if we could talk to someone prior to our training to help us with a few issues we were struggling with. He set up a one-hour call for me with Bill Morgan. I'd have to say that was one of the most productive hours I have ever had with a vendor. I had already *basically* figured out the security, but Bill showed me how to manage the permissions using the "view permissions" and "effective permissions" features in the user setup.

Bill helped me with a number of other items, some of which I have commented on in other posts, but here are some others.

1. The ConnectWise import returned nothing. I had the integrator login access level set to 'Only created by integrator' instead of 'All records'.

2. The tool deployments were not working. No matter which tool I tried to deploy (CCleaner, for instance), I received a log entry of 'Could not transfer file /transfer/filename'. Bill quickly identified the source of the issue: a missing virtual directory in IIS called transfer. He did tell me that it wasn't something I missed in the setup, but an issue that occurred at times when the installer failed to create the directory. It took him 30 seconds to correct it.

3. Explained to me how to change the display name of a machine.  In Kaseya, we would frequently change the names of agents when they were oddball OEM machine names.  It was not at all obvious how to accomplish this in Labtech.  He showed me that by adding a 'Friendly Name' under the Info tab, the machine is displayed properly.

There were a number of other items that I can't recall now, but the only thing he answered that I didn't want to hear was how techs change their passwords in Labtech.  The answer is that they don't, the administrator does.  He did throw in the words "for now", so I assume that this might be a future enhancement.

So Bill gets a virtual high five from me today.  Thanks Bill!

Saturday, April 10, 2010

Started the ConnectWise integration

Last night I started the ConnectWise integration with Labtech.  I ran the installer and followed the instructions.  I was not prepared to complete all of the steps for ticket integration and such, but wanted to at least have the ability to import a client from ConnectWise.  Everything seems to be ok, but when I attempt to use the Import ConnectWise Client tool I only get an empty selection box of clients to import.  This is quite likely something I missed, as I worked on it from about 1 AM until 2:30 AM.

Friday, April 9, 2010

More servers on line

I added 5 more of our internal servers to Labtech.  The agent installed quickly and I moved them into the proper location.

Attempting to install a Mac agent

We are having an issue installing a Mac agent from the service install web page.  We are getting "Page cannot be found" when clicking the link for the LabTechZ.dmg file.  The file is there, so we're not sure what the issue is.  We'll ask support and see what they say.

Installing agents

Last night I spent more time working on agent deployment on machines in our office.  The network scanning works well once you figure out the proper sequence: make a machine a master, then a network probe, then issue a scan for hosts.

Thursday, April 8, 2010

Permissions

I spent about an hour working on the various levels of group permissions.  My goal was to restrict access to groups based on what security level or group the user is a member of.  This is necessary for when we allow internal IT staff at our client site to utilize our RMM tools.  It took a little time to figure it out, but I see that the security levels in Labtech are *very* granular and flexible.

System, Client, and Group permission information is fairly well explained on their wiki: Labtech permissions

Control center installs

Some of the techs went through the process of installing the control center.  Most went well, one failed and we needed to use a work around to install the Crystal Reports 2008 components.  After that, the control center installed just fine.  We found the solution for that one in the LabTech knowledgebase.

Rolling out control centers

I sent the instructions out to the entire technical staff to install the control center.  So far I haven't heard anything negative.

Don't walk away from your desk...

So I submit the support ticket and go for some coffee.  I come back to my desk and see a voicemail from Labtech support!  I called back and left a message.

About 2 minutes later, Matt calls me back and helps me resolve the issue.  The problem was that I was using the MSI file to install instead of the .EXE file.  Anyway, 5 minutes later the control center is installed and running on admin machine #1.

Issue installing control center

So the basic server install is complete and I attempt to install the control center on the first tech PC. I get the following message at what appears to be the end of the install.

There is a problem with this Windows Installer package. A program required for this install to complete could not be run. Contact your support personnel or package vendor.

So I submitted a ticket to support.

Wow!

I just stumbled across something in the documentation under "Alert Actions" that could be seriously fantastic. This alert action is called "Voice". The description for this is:

Voice
This will use the phone line to call the Alert Contact and play a Text to Speech Alert Message if answered.

Note: Requires a Modem in the LabTech Server connected to a working phone line.

I know there are other ways to accomplish this, but this is awesome!

Making progress

So after a little more setup (following the wiki) and a few router changes the server is up!

Moving on

Support got the key/IP issue worked out without any issue and the installation finished in a couple minutes. I am going through the startup questions to finish up.

SSL

While I am waiting for the key to be reset I decided to get the SSL cert set up. Nothing special here. We use godaddy.com for SSL certs. I set up a new cert and had it installed about 15 minutes later.

I should mention that I am basically following the LabTech setup instructions from their Wiki, which have been very easy to follow. Of course, this is the 2nd time (the first time was for the demo server).

Support response

So by 1:20 AM, support responds to confirm that the license key is in fact tied to an IP address. They tell me that if we are installing at the *same* IP, there shouldn't be any issue. If it is a different IP then we need to reset the key (which is what we are doing). I replied telling them what we are doing and asked to have the key reset.

First impression of support response = pretty good.

Wednesday, April 7, 2010

Issue #1

Small snag. I entered our license key in the installer and was told that it was not valid because it was being used on another IP. As it turns out, the demo key that we used for the trial install became our permanent key after we purchased. Even though our eval server is now gone, it must still be registered to the old server IP. I emailed our sales rep and support; I am sure it will be an easy fix.

Server setup

Based on our pre-purchase testing (and the documentation), we determined that we should be fine utilizing one of our existing Hyper-V servers to host our LT server. For perspective, our existing K server is a Dell PE2850 Quad-Xeon x2, 16GB RAM, RAID 10 array with 6x15K drives. For our initial LT server we will have a single (virtual) processor and 4GB RAM.

I created a new Windows 2003 server hosted on our Hyper-V server and ensured we were fully patched. I installed IIS and SMTP services and then ran windows updates again.

After taking care of a few other prerequisites, I downloaded the LT Server installer 2.5i and launched the install.

So far I have a couple hours of time invested (on and off).

I am going to have a few beers and read a little more documentation then revisit this tomorrow.

Getting started

This blog is meant to track our progress of moving from Kaseya to Labtech as our primary MSP/Network management package. In the coming weeks we hope to transition 3000+ workstations and servers from Kaseya. We will also be exploiting the tight (hopefully) integration between Labtech and our ConnectWise PSA.

Wish us luck...