I finally found the time to update all production systems to the latest Service Pack from Microsoft. A few days ago I tested the update process in a development environment: the results were good, but I was wondering what would happen to the Automatic Updates client database and repository.
On the production environment I had the chance to leverage the undo disk feature of Microsoft Virtual Server, since all my Windows servers run as virtual guests: this allowed me to update worry-free and to find the best method to clear all the Windows AU data, on both the client and the WSUS server.
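For reference, the client-side cleanup I settled on boils down to stopping the Automatic Updates service, clearing its local data store, and forcing a fresh detection; a sketch from memory, assuming a default `%windir%` install:

```
rem Stop the Automatic Updates client and clear its local data store
net stop wuauserv
rd /s /q %windir%\SoftwareDistribution

rem Restart the service and force a new detection against the WSUS server
net start wuauserv
wuauclt /resetauthorization /detectnow
```

The `SoftwareDistribution` folder is recreated automatically on the next detection cycle.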
The update process on all my corporate core systems is now complete: if no issues come up during the next two days, I'll probably update all the Windows Server systems in the Phoibos hosting platform over the Easter holidays!
I have just completed the upgrade of my production systems to the latest revision of Windows SharePoint Services. After about two weeks spent planning, testing and developing, I chose the content database migration process: it required a few more steps and added some complexity during the upgrade phase, but the final result is definitely the best among all the approaches suggested in the deployment guide.
More specifically, that upgrade approach allowed me to:
- migrate content based on actual needs, instead of forcing me to upgrade all sites and collections in one big step;
- obtain the highest levels of reliability, security and compliance, by running a new SharePoint 3.0 farm without porting settings from the old release;
- redesign the authentication model by using several new service accounts, while reducing the number of Application Pools by 50%.
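In practice, the database migration approach comes down to attaching each old content database to a web application in the new farm and letting WSS upgrade it as it is added; a sketch with placeholder URL and database names:

```
rem Attach an existing content database to a web application in the
rem new WSS 3.0 farm; the content upgrade runs as the database is added
stsadm -o addcontentdb -url http://portal.example.com ^
       -databasename WSS_Content_Old -databaseserver SQLSRV01
```

Repeating this per content database is what makes it possible to migrate site collections in stages rather than all at once.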
During the SharePoint upgrade I took the chance to also upgrade my Community Server platform, which publishes my corporate blog collection, to the 2.1 SP2 release.
Now I'd like to redesign several applications built on the SharePoint platform (first of all, the Extranet access system) to better serve my customers by leveraging the new features of this exciting release.
After a long period of deferral, I have finally found the time to start upgrading my corporate application platform, based on Windows SharePoint Services, to the latest version from Microsoft.
The first step was to learn about the supported upgrade procedures in the WSS 3.0 Technical Library. Then I spent some hours building a SharePoint web farm “cloned” from the one running on my production systems, to be used for testing all the procedures.
As specified on TechNet, I completed all the pre-upgrade steps in that development environment, then went with the simplest approach (in-place upgrade), since my WSS farm has not undergone many customizations and I have no strict downtime limits to comply with.
By following a fairly simple step-by-step procedure, I was able to see the first results in a couple of hours: all virtual servers, application pools and site collections were upgraded seamlessly by running the SharePoint Products and Technologies Configuration Wizard, after I installed the .NET Framework 3.0, SharePoint 3.0 and the new language packs.
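For the record, the same steps can also be driven from the command line; a hedged sketch of the pre-upgrade scan and the wizard's in-place upgrade equivalent:

```
rem Required pre-upgrade scan of all existing WSS 2.0 sites
prescan /all

rem Command-line equivalent of the Configuration Wizard's in-place upgrade
psconfig -cmd upgrade -inplace v2v -wait
```

The command-line route is mostly useful when scripting the upgrade across several servers; the wizard does the same work interactively.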
A few additional steps were required to finalize the upgrade from the Central Administration web interface.
Only a few minor issues required some manual actions to address.
After that, I was able to uninstall the old language packs and WSS 2.0 from my development systems.
One important thing to remember is the large amount of disk space needed for the WSS diagnostic log files: you may want to dedicate a VHD to them if you enable logging.
From every other point of view, the new SharePoint version is exciting! I am finally convinced it's worth persuading some professional developers to spend the necessary time extending all the applications I currently use and building new solutions on this wonderful platform. 😉
On Friday, February 23rd, my primary Internet connection link went down because of the usual incompetence of my provider. Since I noticed the trouble as soon as it arose, my customers did not suffer any service interruption.
The problem was dynamically reobtaining what the contract called a static IP, which later turned out to be a DHCP reservation. Of course I opened a support ticket, and I'm still waiting for an answer. During the whole day I tried to obtain my IP by bringing the dialer and ATM interfaces on my Cisco router offline and back online, but only after nearly 25 hours was I able to come back online.
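For what it's worth, the bouncing was nothing fancier than shutting the interfaces down and back up from configuration mode; interface names here are examples, yours will differ:

```
! Bounce the ATM interface to force a new negotiation with the provider
conf t
 interface ATM0
  shutdown
  no shutdown
 exit
end
! Check which address the dialer actually obtained
show ip interface brief
```

When the far end only hands out the reserved address on its own schedule, though, no amount of interface bouncing helps, as the 25 hours proved.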
Thankful once more for my dual-connected network infrastructure design, I restored the original RRs in my DNS zone and went back to work. 🙂
As usual, when the holidays come I have time to do whatever I missed during the year. Our corporate PC full-provisioning system has finally reached production! It took me some hours to resolve many issues with the WMP11 deployment, and I had to deal with a time-consuming MSI repackaging, but now I can press F12 after the POST on a PC, type in my domain credentials, and let Remote Installation Services (RIS) on Windows Server 2003 and the IntelliMirror software distribution feature do the work for me. When I come back (about an hour later) the client is set up, all corporate policies have been enforced, all programs have been deployed and all patches applied: I only have to log on and start using my new PC… everyone who works with a PC at business should get that for Christmas! 😀
Until now, all administration of my two Cisco Internet routers was done over vty terminal sessions, using the telnet transport protocol. Since an access list allowed vty connections only from the internal networks, it was not a great security issue, but I still felt a certain unease every time I typed the enable password into my terminal window! :S I had of course enabled the SSH server included in that wonderful thing which is the Cisco Internetwork Operating System (IOS) some time ago, but my old SSH client (OpenSSH) interacted badly with it. At the time I blamed IOS for it… only today, retrying with a different SSH client (PuTTY), did I realize my mistake! I hope this was the last time I doubt the quality of IOS… now all my management traffic (and authentication) flows encrypted between hosts on my internal networks, and I can finally go to bed without nightmares about security concerns. 😉
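For reference, enabling the IOS SSH server and restricting the vty lines to it takes only a few lines of configuration; hostname and domain below are placeholders, and a domain name is required before the RSA key pair can be generated:

```
! A hostname and domain name must be set before generating the RSA key pair
conf t
 hostname gw01
 ip domain-name example.com
 crypto key generate rsa
 ip ssh time-out 60
 ip ssh authentication-retries 3
 ! Accept only SSH (no more telnet) on the vty lines
 line vty 0 4
  transport input ssh
 end
```

Keeping the existing access list on the vty lines on top of this means the sessions are both encrypted and restricted to the internal networks.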
Yesterday morning, instead of starting on what I was supposed to do, I began fixing a couple of problems that had been annoying me for weeks.
One of the main concerns I solved was the need for external users to enter the same credentials each time they access a web site published by my array of ISA servers.
In effect, ISA 2006 comes with an SSO Web Listener feature, but when I last worked on it I had more urgent tasks to accomplish, so I did not find the time to test it and bring it into the production environment.
My first impression in a test environment was good, so I spent some hours (after midnight, obviously) activating this feature on the production servers. There was some trouble, mostly due to the Outlook Web Access application configuration, but in the end I reached the goal: now you are asked for credentials only once, and the user experience of accessing the Phoibos online services has improved dramatically (give it a try at http://mailhost.valsania.it/)… thanks one more time, ISA! 😉
Yesterday I realized that my primary public DNS server was not reachable from the Internet. At first I thought it was a firewall rule problem, since I had recently migrated the single ISA Server 2004 virtual machine to a new highly-available ISA 2006 array, but I was wrong: nothing in the ISA configuration, nor in the ifconfig output on the published BIND server.
It took me almost a day to realize that the random behavior of the published service was probably due to the NLB driver running in multicast mode. When it processes UDP requests such as common DNS lookups, it creates an association between the client and an NLB node: this is called “client affinity”. When the affinity is established between a resolver and the GE1FW02 node, there seem to be timeout problems in serving the request.
I have temporarily worked around this problem by publishing the BIND server with a reverse DNS proxy rule, making ISA Server rewrite the IP packet headers so that its internal IP appears as the source address for all lookups coming from the External network. But that's not all… I want to know whether the problem really shows up only when DNS lookups are served by the second NLB node, by stopping the first node without, obviously, causing any disruption to external Internet clients. I'll post the results later…
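The test I have in mind is simply draining the first node, so existing connections complete while new lookups land on the second one, then querying from outside; a sketch with placeholder names:

```
rem On the first NLB node: stop accepting new connections while
rem letting existing ones complete (no abrupt disruption)
nlb drainstop

rem From an external client: check whether lookups now served by the
rem remaining node still time out
nslookup www.valsania.it ns1.valsania.it
```

The `nlb` tool (formerly `wlbs`) ships with Windows Server 2003, and `drainstop` is exactly the graceful removal I need here.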
Last week I noticed that HTTP/S connections to my primary ISP public address were being randomly dropped for short periods. Since this little problem was affecting the user experience of the Phoibos services customers, I worked for a couple of days to find a solution.
The cause of the problem turned out to reside in the Web Listener component of ISA Server 2004 Standard Edition: in effect, the configuration of my firewall (which I had deployed to work around my ISP's low reliability), with two external NICs pointing to different gateways plus a few other tricks, was rather complex and obviously unsupported by MSFT… 😉
The best solution was to deploy an array of ISA Servers (only possible with the Enterprise edition) working as load-balanced gateways, both for accessing the Internet and for publishing my servers on the Internet. Furthermore, the bi-directional affinity functionality provided by the new NLB service in Windows Server 2003 was also the best way to publish the same service simultaneously on both public IP addresses, which was exactly what I needed.
The migration from my old ISA 2004 single-server deployment to the new ISA 2006 array was a little more complex than I thought, mainly because all the ISA machines I wanted to deploy were hosted on two physical Virtual Server 2005 R2 hosts (if you have ever had to configure NLB clusters in a virtualized infrastructure, you know what I mean…). After some trouble I decided to set up the NLB service outside the control of the ISA services, to be able to make NLB work in multicast mode (the best option if you need virtual guests on different virtual hosts “converged” on the same virtual IP).
As I write, the new solution has been deployed for some hours, and everything seems to work very well and, obviously, in a more available and secure way. I think there are still a few adjustments to make… hoping for as little trouble as possible! 😀
In order to test the new SQL Server 2005 Enterprise failover cluster, I decided to bring the SharePoint development lab back online, now running on a production web server. This demo, part of the development roadmap of the Phoibos project, is intended to be used by all authorized people for testing purposes, first of all for storing and managing the corporate files that usually reside on their internal file servers, relieving them of the need to maintain, synchronize and protect this local data source for all their corporate employees.