To The Point

Creating Mega Drop Down Navigation in SharePoint with jQuery – Part 2

January 24th, 2010 by tdietz

This is part two in a three part series for implementing Mega Drop Down navigation in SharePoint 2007.  The topics covered are:

Part 1 – Tables, Unordered Lists, and SharePoint
Part 2 – Custom Control Rendering
Part 3 – Hooking it all up with jQuery

Since the out-of-the-box SharePoint menu control renders the HTML in <TABLE> elements, skinning and customizations are limited.  The rendering for my MegaDropDown is based on the <DIV> and <UL> elements and provides many more customization options.  Note:  In SharePoint 2010, the AspMenu control has an option to render with <UL> based elements.  I haven’t ported the MegaDropDown to SP2010 yet, but will post my results when I get a chance to convert it.

Once you bind to the navigation’s data source, you can easily walk through the collection of items and render HTML on the fly. I was doing this in my first iteration of the MDD, but things got a little complicated as I needed to parse all the available menu options and then apply some custom jQuery logic once everything was calculated.  Because of this, I opted for a two-pass approach:

Pass 1 – Iterate through the navigation data source and build my own customized collection that stored some extended properties that were specific to the MDD.

Pass 2 – Iterate through the custom collection and render the HTML

While there is obviously an extra step involved (and another copy of the navigation), it really simplified the code and the overall process.

The custom collection essentially consists of several container classes that keep track of the navigation items (Title, Url, Type).  The first two properties are pretty obvious, but Type is tailored for the MDD.  Type determines whether the item is a standard menu item or an image.  A benefit of the MDD is that it has the ability to display a site “image” within the navigation drop down itself (more on this later).

Code Snippet
[Serializable]
public class MenuItem
{
    public string Name;
    public string Html;
    public string Url;

    public MenuItem() { }

    public MenuItem(string name, string html, string url)
    {
        Name = name;
        Html = html;
        Url = url;
    }

    public MenuItemType Type;
}

Displaying and grouping items

There are two formats for displaying menu items: standard and grouped.  Standard is just what you would expect: menu items are rendered in the order defined in SharePoint.  Grouped items, however, allow you to organize links into categories, which can significantly simplify the interface when you need to display dozens of links.  A major requirement of the MDD project was to keep the customization as transparent and compatible with SharePoint as possible.  The grouping logic is one place where some customizations are visible.  In order to group items there needs to be a way to define the group name (i.e. the category) as well as which links are associated with each category.


MegaDropDown with uncategorized items


MegaDropDown with categorized items

The client I was working with was very adamant about using the standard SharePoint “Modify Navigation” UI, so I am not entirely happy with the solution for grouping, but it does solve the problem without building any new web forms or pages.  A special syntax was created so an admin simply has to enter it within the standard UI.  When you want to specify that a link should fall within a specific group, you append $GroupName to the link title.  For example, if you wanted to create a group named Offices and add the Chicago office, you would enter:

Chicago$Offices

That’s it.  The parsing code will detect this and separate out the names automatically.  Here’s how it would look in the Navigation Link dialog:

[Screenshot of the Navigation Link dialog]

As I said, I don’t exactly like the approach, but it is a pretty low-impact solution and took about 20 minutes to write the parsing code—and no UI changes were needed!
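The parsing itself is trivial.  A hypothetical sketch of the idea (the article does not show the actual parsing code, and this helper name is mine):

```csharp
// Hypothetical sketch of the Title$GroupName parsing convention described above;
// the actual MDD parsing code is not shown in this article.
public static void ParseTitle(string title, out string name, out string group)
{
    int pos = title.LastIndexOf('$');

    // "Chicago$Offices" -> name "Chicago", group "Offices";
    // a title without a '$' is a standard, ungrouped item.
    if (pos > 0 && pos < title.Length - 1)
    {
        name = title.Substring(0, pos);
        group = title.Substring(pos + 1);
    }
    else
    {
        name = title;
        group = null;
    }
}
```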

I keep track of each MenuItem in a generic List.  Since there are two types of dropdowns (standard and grouped), I keep track of each link type in a separate List. 

Code Snippet
[Serializable]
public class MenuNode : MenuItem
{
    public List<MenuGroup> Groups = new List<MenuGroup>();
    public List<MenuItem> Items = new List<MenuItem>();
    public List<MenuNode> Nodes = new List<MenuNode>();

    public MenuNode() { }

    public MenuNode(string name, string html, string url)
        : base(name, html, url)
    {
    }

    public MenuState State;
}

 

Keeping track of all of the navigation types is a little tricky, so hopefully the following diagram helps illustrate how the collections relate:

Once all the parsing is done and I am ready to render the items, I simply iterate through each list and generate the HTML.  For brevity, I’ve removed some of the housekeeping logic, but the RenderMenus() method walks the main List<> and calls RenderGroupItems() or RenderMenuItems() accordingly:

Code Snippet
private bool RenderMenus()
{
    int menuCount = 0;
    string output;
    string _menuItem = "<li class='menuitem{0}'><a href='{1}'><span class='menuitem{2}'>{3}</span></a></li>";
    string _parentMenuItem = "<li class='nav_sites{0} menuitem{1}'><a href='{2}'><span class='groupmenuitem{3}'>{4}</span></a>";

    writer.Write("<ul id='navigation'>");

    // renders the top level menu (horizontal navigation bar)
    foreach (MenuNode node in _menuNodes)
    {
        string current = string.Empty;

        // if this site (or any sub-site within) is the current site, add a CSS style so we can highlight it
        if (node.State == MenuState.MenuSelected)
            current = " current";

        // just a single-level menu item (no kids)
        if ((node.Items.Count == 0) && (node.Groups.Count == 0))
        {
            output = string.Format(_menuItem, current, node.Url, current, node.Name);
            writer.Write(output);
        }
        else
        {
            if (node.Groups.Count > 0)  // traverse all groups
            {
                output = string.Format(_parentMenuItem, menuCount, current, node.Url, current, node.Name);
                writer.Write(output);

                RenderGroupItems(writer, node, menuCount);

                writer.Write("</li>");
            }
            else if (node.Items.Count > 0)
            {
                output = string.Format(_parentMenuItem, menuCount, current, node.Url, current, node.Name);
                writer.Write(output);

                string scriptBlock = RenderMenuItems(writer, node.Items, menuCount);
                scripts.Add(scriptBlock);

                writer.Write("</li>");
            }

            menuCount++;
        }
    }

    writer.Write("</ul>");
    return true;
}

Rendering Menu Items

For the most part, displaying each of the menu items involves the same logic: you iterate through the items of each top-level menu and emit an individual <li> element for each one.
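The RenderMenuItems() method itself is not shown in the article.  A simplified sketch of what it might look like, matching the signature used in RenderMenus() above (the script-block bookkeeping is omitted, and the writer type is my assumption):

```csharp
// Hypothetical sketch of RenderMenuItems(); the real method also builds and
// returns a jQuery script block, which is omitted here.
private string RenderMenuItems(HtmlTextWriter writer, List<MenuItem> items, int menuCount)
{
    writer.Write(string.Format("<div id='menu{0}' class='menu'>", menuCount));
    writer.Write("<ul class='sub_nav'>");

    foreach (MenuItem item in items)
    {
        // each sub-item becomes a simple <li><a><span> entry
        writer.Write(string.Format("<li><a href='{0}'><span>{1}</span></a></li>",
            item.Url, item.Name));
    }

    writer.Write("</ul>");
    writer.Write("</div>");

    return string.Empty;  // placeholder for the generated script block
}
```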


The end result of the HTML might look something like this:

Code Snippet
<div id="navcontainer">
    <ul id='navigation'>
        <li class='menuitem current'>
            <a href='/Pages/Default.aspx'>
                <span class='menuitem current'>Home</span>
            </a>
        </li>
        <li class='menuitem'>
            <a href='/news'>
                <span class='menuitem'>News</span>
            </a>
        </li>

        <li class='nav_sites0 menuitem'>
            <a href='/locations'>
                <span class='groupmenuitem'>Locations</span>
            </a>

            <div id='menu0' class='menu'>
                <div class='menunavitems' id='menunavitems_1'>
                    <table cellpadding='0' cellspacing='0' class='navlinklayout' width='100%' border='0'>
                        <tr>
                            <td class='grouplayout' valign='top'>
                                <h3 class='cat_header'>Offices</h3>
                                <ul class='sub_nav' id='1'>
                                    <li>
                                        <a href='/offices/chicago'><span>Chicago</span></a>
                                    </li>
                                    <li><a href='/offices/newyork'><span>New York</span></a></li>
                                </ul>
                            </td>
                        </tr>
                    </table>
                </div>
            </div>
        </li>

        <li class='menuitem'>
            <a href='/reports'>
                <span class='menuitem'>Reports</span>
            </a>
        </li>

    </ul>
</div>

 

The code is fairly straightforward and I tried to make the HTML as simple as possible.  The entire navigation, including the mega drop down sits inside the main <div> container ‘navcontainer’.  The ‘navigation’ <UL> handles the horizontal top ‘tabbed’ navigation menu.  Since it is <UL> based, I style the layout and appearance with CSS.  Each <LI> represents a tab defined in SharePoint’s navigation page. 

When sub-navigation items exist (e.g. sub-sites), a new <DIV> (menu) defines the drop down itself.  Although I tried to follow a flexible <DIV> based layout, the sub-items list is tabular and fits nicely in a <TABLE>.  If grouping exists, each group (or category) is displayed with a styled <H3>.  All the sub-items are then displayed within an embedded <UL> (sub_nav).

In my next and final article, I will explain how I hook it all up with jQuery.  (I promise that it won’t be nearly as long between postings).

As always, comment or email me if you have any questions.

Building a SharePoint Development Farm

November 28th, 2009 by tdietz

I have finally been able to get some free time and build out a SharePoint Development Farm. For the past six months I have been using a virtualized SharePoint farm running VMware workstation on top of Windows 7. While some thought went behind the initial design, the farm had been showing signs of fatigue and it was apparent that it really could not support the number of users needed.

In addition to the DEV farm, I was running standalone SharePoint farms in VMware on my laptop. While this worked better than I expected, it certainly had its share of issues.

Like most consultants, I had VMs spread all over—VMs for sales demos, projects, and personal sandboxes. It always seemed that either a feature that I wanted to demo or some code that I wanted to work with was on a VM that was offline and I would constantly be switching back and forth between VMs.


Requirements

There certainly are many areas to consider when building a farm, let alone a SharePoint farm. While this is my personal development lab (that I will end up using on client projects), I wanted to follow the best practices that I promote to my clients and define the goals and requirements up-front. It is worth mentioning that I was building this for myself and did have to stay within a specific budget.  It should not be considered a recommendation or blueprint for a production environment.  With that in mind, the following is a short list of the major goals for the new environment.

Isolation

  • The SharePoint farm needs to support several types of users—mainly designers and developers.  Many of their responsibilities overlap, however each user approaches their environment uniquely and the farm should accommodate their needs—users should not have to radically change their process to match what the farm provides. 
  • A mixture of SharePoint 2007 and SharePoint 2010 environments is needed, with isolation among them.  While Sandbox Solutions in SharePoint 2010 seem to provide a fair amount of isolation, that feature is only available in SharePoint 2010. 
  • Each user should have their own OS, database, and SharePoint farm.
  • The environment should support a mixture of MOSS 2007 and MSS 2010 farms.

Scalable and Performant

  • SharePoint provides a very flexible architecture that can easily scale out to handle load. In this environment however, load in terms of user requests for a particular farm is not as important.  In terms of scalability, the environment needs the ability to support additional farms.
  • The farm should consist of multiple, independent farms each connecting to their own SQL Server instance.
  • A centralized Active Directory domain should provide overall management of users across each farm.
  • Creating new instances should be accomplished with minimal effort.  The overall day-to-day management should be as automated as possible so that only minimal manual intervention is needed.
  • Specific farms should be fairly portable and not tied to any host machine; individual instances should be able to be moved to new physical hardware with minimal effort.

Development

  • The farm should be fairly flexible and allow developers complete freedom over their environment. 
  • The farm should provide core services such as source control and continuous integration.
  • The source control solution should be flexible and allow developers to work remotely without requiring a direct connection (e.g. VPN) to the source control repository.

Solution

While the requirements defined are fairly high-level and straightforward, I wanted to make sure that I at least captured them so that the environment would support them.  Given the fact that the old host machine (on Windows 7) was quickly overloaded, I decided that having multiple host servers in the farm would be ideal.  As I opted to set up a completely new farm, and I love shiny things, I chose the latest and greatest and went with Windows Server 2008 R2 and SQL Server 2008.  Although I have read a little on SQL Server 2008 R2, I have not kept up on it so SQL 2008 Standard was my choice.

Software

  • Operating System: Windows Server 2008 R2 Standard Edition x64 (MOSS 2010 Beta 2 requires KB976472)
  • Database: SQL Server 2008 SP1 x64 (MOSS 2010 Beta 2 requires KB970315)
  • Virtual Machine Management: System Center Virtual Machine Manager 2008 R2 (SCVMM)
  • SharePoint 2007: SharePoint 2007 Enterprise SP2 (plus the OCT09 Cumulative Update)
  • SharePoint 2010: SharePoint 2010 Beta 2 Enterprise
  • Development Tools (2007): Visual Studio 2008 SP1
  • Development Tools (2010): Visual Studio Ultimate 2010 Beta 2
  • Source Control (Server): VisualSVN Server (Subversion 1.6.6)
  • Source Control (Client): VisualSVN Client (Subversion 1.6.6)

Architecture

As mentioned earlier, I wanted an environment that could handle the bulk of the workload without serious performance degradation when running multiple virtual machines (and the occasional LFD2 break).  With the old Windows 7 VMware Workstation environment, disk access seemed to be the primary cause of poor performance, so I wanted to ensure that this would not happen again.  I also wanted to make sure that everyone was not at the mercy of a single VM host server.  I have always built my own servers, so if this interests you, here are the specifications for the environment (it is a mixture of spare parts and some new equipment).

SharePoint Farm

Host Server 1
  • Motherboard: ASUS P6T Deluxe
  • CPU: Intel Quad Core i7-920 2.66 GHz
  • Memory: G.Skill Ripjaws DDR3-1333 4GB x 4 (16GB total)
  • System Drive: Western Digital VelociRaptor 300GB
  • Data Drive: 2TB RAID-10 Array (4 x 1TB Seagate Barracuda ES.2 drives)

Host Server 2
  • Motherboard: ASUS P5K-E
  • CPU: Intel Quad Core Q6600 2.40 GHz
  • Memory: Mushkin DDR2-800 (PC2 6400) 2GB x 4 (8GB total)
  • System Drive: Western Digital VelociRaptor 300GB
  • Data Drive: 1TB RAID-5 Array (3 x 1TB Seagate Barracuda ES.2 drives)

 

Server 1 is the primary Hyper-V host, and with the Intel i7 offering four cores with Hyper-Threading, Windows detects that eight CPUs are available!  Hyper-Threading is a bit of an illusion (there are really only four true CPU cores available), but it is still very fast.  Host Server 2 provides a secondary Hyper-V host and the SQL Server database.  The main reasons for including it in the farm were (1) I already had the hardware lying around, and (2) it is a cost-effective way to offload guests onto another machine.  During peak times there can be a lot of activity on the farm, and having a secondary host is helpful.

Disk Performance

I decided against reusing several hard drives that I had lying around even though they were only a year old.  I wanted all drives in the arrays to be completely identical, so I chose the Seagate Barracuda ES.2 drives.  The ES moniker designates “Enterprise Storage” and is probably a lot more marketing hype than anything, but I have read good things about them.  I chose them for three main reasons:

  • They are rated at a phenomenal 1.2M hours MTBF.  Keep in mind that MTBF is a statistical average across a large population of drives rather than a guarantee that any single drive will run 1.2 million hours, but it is still a strong indicator of reliability.
  • They are rated for 100% Duty Cycle (24×7).  This is something that many manufacturers neglect to mention.  While typical desktop drives are always on and spinning, they may not always be actually doing anything.  Manufacturers understand this and forgo the added expense of using higher quality (and higher cost) components.  Server drives, on the other hand, may always be active.  Having a high duty cycle rating ensures that even if the drives are always active, you can still assume the MTBF rating.
  • They come with a 5 year warranty.  Even with the understanding of MTBF and duty cycle, I cannot assume the drives will not fail.  Many drives come with a 5 year warranty, but I wanted to make sure that these did as well.

The drives are a little pricier than most, but I felt the cost was justified.  They are all 7,200 RPM, and I did consider the faster disks that are available.  The cost of 10,000 RPM drives would wreak havoc on my budget, and 15,000 RPM drives would decimate it.  I did opt to use Western Digital VelociRaptor drives for the system disks on both host servers.  Normally I would have recommended a RAID-1 (mirror) configuration for these, but the budget did come into play.  I did, however, mitigate some risk by setting up a regular image snapshot of the system.

Jan 2, 2010 – Update:  A friend of mine pointed me to the Samsung drives that carry a 7yr warranty.

RAID Configuration

I have been using RAID-5 arrays for years and was always interested in testing RAID-10.  After some research, it was obvious that RAID-10 was going to be the array for the new farm.  If you are not familiar with RAID-10 (also written RAID 1+0), it is essentially a striped (RAID-0) array built from mirrored (RAID-1) pairs.  So you get the performance of striping plus the redundancy of mirroring (mirroring also improves performance for certain read operations).  Note that this is subtly different from RAID 0+1, which mirrors two striped sets.  In my tests, I saw almost a 400% increase in read speed and about a 200% increase in write speed when using RAID-10 over RAID-5.

Backup and Recovery

Since the farm will be used by several friends and colleagues with whom I would still like to remain friends, I wanted to make sure that their project data was safe.  I am on a budget and had a Windows Home Server sitting idle most of the time, so I decided to include it in the farm.  Because I was still paranoid about losing data (say, if my house burns down), I also wanted an offsite backup process.  I have been using Mozy for quite some time and have been pretty happy with it, so it is included in the solution as well.

Backing up SharePoint can be accomplished in many different ways and I recommend that you have a mixture of STSADM, database, and file system/OS imaging.  Here’s the breakdown of what happens:

  • STSADM (Weekly): Run from a scheduled task, this handles backing up the entire farm (Content, Configs, and SSP).
  • Database Backups (Nightly): Run from a SQL Server Maintenance plan, this backs up the actual databases.
  • Windows Server Backup (Bi-Weekly): Handles backing up each Hyper-V guest.  Due to the size of each VM (approximately 128GB), this is the most resource and network intensive backup.  Since we are already backing up the databases and the farm itself, and I can spin up a new VM from a baseline in about 30 minutes, daily snapshots were not a huge concern.  In reality, not much changes on the OS itself: the designers are usually focused on SharePoint Designer-based development, and the developers rely on source control for backups.

 

The backups are run on each of the respective servers with a Windows Scheduled Task that moves them to the Home Server.  The Home Server provides a very reliable 4TB storage array, but if it were to die, we could be in trouble.  With that in mind, I have Mozy routinely back up the STSADM and database backups.  However, due to the size of the VM image backups, it is impractical to continually send them to Mozy.  While I did an initial push to Mozy with the baseline images (which took about 72 hours), I am relying on the combination of database and STSADM backups as the fail-safe.
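As a concrete illustration, the weekly STSADM pass boils down to a scheduled task running something like the following (paths and server names are examples, not my actual configuration):

```bat
:: Example only: weekly full-farm backup (content, configuration, and SSP),
:: followed by a copy of the backup files to the Windows Home Server share.
"%CommonProgramFiles%\Microsoft Shared\web server extensions\12\BIN\stsadm.exe" -o backup -directory D:\Backups\Farm -backupmethod full
robocopy D:\Backups\Farm \\homeserver\Backups\Farm /E /R:2 /W:5
```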

Virtualization

With the exception of the SQL Server database, everything is virtualized.  As each developer has their own virtualized development environment with their own SharePoint farm, these separate instances allow me to mix and match MOSS 2007 and MSS 2010 environments.  Surprisingly, I only needed to allocate 1GB – 2GB of memory for each instance (excluding the central SQL server).  Several users focus on work with SP Designer and use their own laptops or desktops remotely.  Several developers (including myself) however, spend a lot of time in Visual Studio so those instances typically get 3GB – 4GB.

I built out several baseline VMs for MOSS 2007 and MSS 2010.  Once I installed all the development tools and applications, I used sysprep to clean out the unique identifiers.  Sysprepping them allows me to easily spin up a new VM based on predefined configuration.  With those images saved, creating a new environment for a project or developer is about a 30 minute process:

  • Create a new SQL server instance
  • Create a new Hyper-V guest using a copy of the base image for the VHD
  • Start the VM and configure the System Properties such as machine name (which Sysprep removed)
  • Join it to the domain
  • Run the appropriate SP Configuration Wizard.

VM Management

I manage all of the VMs with System Center Virtual Machine Manager 2008 R2.  Initially I wanted to use its VM Clone functionality to create new instances, but it did not work exactly the way I expected.  So for now, I manually make a copy of the baseline VHD and attach it to the new Hyper-V guest.  As I have two Hyper-V hosts, SCVMM does make managing the instances easier, but I do not use it for much else.

Remote Access

As our development teams are not all physically located in the same place, providing remote access to their environments was needed.  Obviously each developer can access their SharePoint farm through a standard web browser by pointing it at their uniquely assigned port and configuring port forwarding.  Sometimes, however, the developer needs to get onto the box, so Remote Desktop was needed.  By default, Remote Desktop always uses port 3389.  I am using a personal-grade firewall that does not provide the advanced features you might find on an enterprise-level Cisco appliance, but it does support Port Forwarding, which allows me to redirect outside requests to specific internal machines based on the port number being requested.  Since there is no easy way to redirect traffic on the same port to different machines, changing the port number that Remote Desktop listens on gets around this.  Changing the port is very easy:

  • Start the Registry Editor (RegEdit.exe)
  • Locate the HKEY_LOCAL_MACHINE \ SYSTEM \ CurrentControlSet \ Control \ Terminal Server \ WinStations \ RDP-Tcp key
  • Change the PortNumber DWORD value from 3389 to the unique port number that you want to use (make sure it is in decimal)
  • Ensure that any local firewalls (e.g. Windows Firewall) are configured to allow traffic on the new port
  • Restart the server
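The steps above can also be scripted with a single command (3397 is just an example port, and the server must still be restarted afterwards):

```bat
:: Example only: switch RDP to port 3397 (decimal). Run as an administrator, then reboot.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v PortNumber /t REG_DWORD /d 3397 /f
```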

Once the machine restarts, it should be listening for RDP requests on the new port.  When developers need to remote into their machine, they simply specify their port.

Example:
mstsc /v:quartz:3397

I’d like to eventually change this process as it’s not an ideal solution.  I am considering either setting up a Microsoft ISA server or getting an enterprise-level firewall.  If anyone has any suggestions, please leave me a comment.

Development Process

As a long time developer and someone who has focused on the development side of SharePoint, I wanted to ensure that the environment would support an Agile process that included a continuous integration (CI) build server.  Developers in other ecosystems have been following this process for years, but I have not really seen it become widely adopted in SharePoint.  If you aren’t familiar with CI: as developers check in their changes, an automated process compiles and deploys the code to a test instance of SharePoint.  This lets you know right away when a recent check-in has broken something.  While this works great for regular projects, due to the nature of SharePoint’s design, it can be a little challenging.

The CI process is managed by CruiseControl.NET and is primarily responsible for monitoring the source control repository.  When it detects new source control commits (check-ins), it executes scripts that build and deploy them to SharePoint.  Developers can use a monitoring program such as CCTray to stay updated on build progress.  If the latest code doesn’t build or deploy correctly, a notification (a system tray popup in the case of CCTray) informs everyone.  While CI isn’t always needed on every project, it proves to be very helpful when you are working with several developers.  Developers are not always aware of what others are working on and, at times, make a change that breaks someone else’s code.  It is actually pretty slick to easily see the status and activity of the various projects being worked on.

I highly recommend that you look at these free, open-source projects.  Both CC.NET and CCTray have been available for many years and support a wide range of source control systems.  The environment currently uses Subversion for source control, but I have started to play with VS Team System 2010 Beta 2.  It is not as lightweight as Subversion, but I will probably move off of Subversion in the coming months.
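To make this concrete, a trimmed ccnet.config project block for this kind of setup might look like the following (the project name, server names, and paths are examples, not my actual configuration):

```xml
<!-- Hypothetical CruiseControl.NET project: poll Subversion, then build with MSBuild. -->
<project name="Intranet.Branding">
  <sourcecontrol type="svn">
    <trunkUrl>http://svnserver/svn/intranet/trunk</trunkUrl>
    <workingDirectory>C:\Builds\Intranet</workingDirectory>
  </sourcecontrol>
  <triggers>
    <!-- check for new commits every 60 seconds -->
    <intervalTrigger seconds="60" />
  </triggers>
  <tasks>
    <msbuild>
      <executable>C:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe</executable>
      <workingDirectory>C:\Builds\Intranet</workingDirectory>
      <projectFile>Intranet.sln</projectFile>
      <buildArgs>/p:Configuration=Release</buildArgs>
      <targets>Build</targets>
    </msbuild>
  </tasks>
</project>
```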

In terms of architecture, a separate VM instance has been dedicated as the CI server.  Right now it just handles the monitoring, building, and deploying of changes, but in the future, I would love to include automated unit testing.

Build Process

Most of the configuration process is pretty straightforward and you can easily find sample configuration scripts, but as expected, SharePoint-specific development can be a little tricky.  As I mentioned earlier, several developers focus on design work with Master Pages/Page Layouts and branding with CSS.  They prefer to store their assets in the Style Library of the SharePoint Publishing site.  While having them manually export all of these files to a local directory to commit them to source control is an option, it is tedious and error-prone.  Instead, I created a custom MSBUILD task (in C#) that uses the SharePoint API to grab the latest versions of their assets in the Style Library and commit them to the source control repository.  As all code is deployed using SharePoint Features and Solutions, deploying the assets back to the Style Library is all handled through SharePoint’s Feature Deployment architecture.  I plan on blogging about this process in a future article.
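Until that article, here is a simplified sketch of the idea: a custom MSBuild task that pulls files out of the Style Library onto disk so they can be committed.  The class, property names, and paths are my assumptions (not the actual task), and error handling and versioning details are omitted:

```csharp
// Hypothetical sketch of a custom MSBuild task that exports Style Library assets
// to a local working copy for source control commits. Not the actual implementation.
using System.IO;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;
using Microsoft.SharePoint;

public class ExportStyleLibrary : Task
{
    [Required]
    public string SiteUrl { get; set; }      // e.g. http://intranet (example value)

    [Required]
    public string OutputPath { get; set; }   // local working copy to commit from

    public override bool Execute()
    {
        using (SPSite site = new SPSite(SiteUrl))
        using (SPWeb web = site.RootWeb)
        {
            SPList library = web.Lists["Style Library"];
            foreach (SPListItem item in library.Items)
            {
                if (item.File == null)
                    continue;   // skip folders

                // write the latest version of each asset to disk
                string target = Path.Combine(OutputPath,
                    item.File.Url.Replace('/', Path.DirectorySeparatorChar));
                Directory.CreateDirectory(Path.GetDirectoryName(target));
                File.WriteAllBytes(target, item.File.OpenBinary());
            }
        }

        Log.LogMessage("Exported Style Library to {0}", OutputPath);
        return true;
    }
}
```

The CI server can then run its normal svn commit step against OutputPath after this task completes.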

Conclusion

So far the environment is working very well.  I was really surprised by the performance.  I was so disappointed by VMware Workstation (and in all fairness, comparing it with Hyper-V is apples to oranges) that I was not expecting the new farm to run as efficiently as it does.  This is not a big farm by any measure, but at peak times I can have up to eight developers/designers working without any noticeable performance hits.

There were a few hiccups along the way, but things have settled down and the developers seem pretty happy.  While this sounds like a lot of information to digest, it really did not take too long to set up and I probably spent about two weekends getting everything up and running.  I tried to automate as much of this as possible so I do not need to be involved in day-to-day maintenance.  I will post more about the Continuous Integration process in a future article.  I really believe that SharePoint can be used as an application development platform (especially with MSS 2010).  If you have any questions or suggestions, please let me know. I am always interested in tweaking and improving the process and I want to inspire developers to treat SharePoint as a true development platform.

TSAimage Feature Not Found during STSADM import

September 14th, 2009 by tdietz

Recently while importing a site export into a new SharePoint farm, I ran into the error:

FatalError: Could not find Feature TSAimage.

I did find some references to TSAImage with Google, but nothing specific.   I finally tracked it down to a Feature that gets installed with the Fantastic Forty ApplicationTemplateCore solution.  Apparently I had this installed on the original farm but not on the new one.  Once I installed ApplicationTemplateCore.wsp and deployed it to the new site collection, stsadm happily imported everything.

You can find the ApplicationTemplateCore solution here.

Creating Mega Drop Down Navigation in SharePoint with jQuery – Part 1

August 10th, 2009 by tdietz

This is part one in a three part series for implementing Mega Drop Down navigation in SharePoint 2007.  The topics covered are:

Part 1 – Tables, Unordered Lists, and SharePoint
Part 2 – Custom Control Rendering
Part 3 – Hooking it all up with jQuery

Although the styling and navigation had to be scrubbed, a screencast of the final solution can be found here:

 

Providing users with rich navigation is the latest trend in web design and the hottest technique today is the Mega Drop Down.  Rather than providing single column navigation, Mega Drop Downs present menu information in a rich, two-dimensional format:

[Screenshot: a Mega Drop Down menu with links in multiple columns]

As you can see in the above example, the drop down menu provides navigation links in multiple columns (and even has a slick transparency effect).  Sites with Mega Drop Downs are popping up all over the web and usability guru Jakob Nielsen has written a great article describing the benefits.

For most sites, implementing a Mega Drop Down is fairly straightforward and most of the effort focuses around CSS.  If you have ever customized navigation in SharePoint, you are well aware that things are easy enough—but only to a point.

Out-of-the-box, SharePoint uses the SharePoint:AspMenu class that inherits from the standard ASP.NET asp:Menu control.  This control provides the ability to be skinned and can be customized through the use of CSS and ItemTemplate support, but at its core, it still is a <TABLE> based control and isn’t flexible enough to radically change its appearance.

Rather than relying on <TABLE> based navigation, unordered lists (<UL>) styled with CSS can easily be used to implement navigation.  I want to keep this post focused on SharePoint and not CSS, so if you want to read up on how to create the shell for the Mega Drop Down, I recommend starting with this article.  To summarize the concepts, we essentially create a nested set of unordered lists:

<ul id='topNav'>
  <li>Employee Services
    <div id='mdd'>
      <h3>Benefits</h3>
      <ul id='subNav'>
        <li>401(k) Information</li>
      </ul>
    </div>
  </li>
</ul>

The ‘topNav’ <ul> represents the top navigation that is displayed horizontally.  The <div> is the actual Mega Drop Down itself and contains one or more ‘subNav’ <ul> elements that represent the sub-menu items.  The <h3> tag is used as a category heading so we can group menu items together (more on that in part 2).

CSS is used to style the outer ‘topNav’ so elements are displayed inline.  We also hide the <div> as it should only appear when you hover over an <li> within ‘topNav’.  I applied some extra styling, but so far, navigation should look something like this:

[Image: the top navigation bar after applying the CSS]
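The styling just described can be sketched as follows.  The selectors match the sample markup above (‘topNav’, ‘mdd’, ‘subNav’); the specific property values are illustrative placeholders, not the exact stylesheet:

```css
/* Display the top-level menu items horizontally */
#topNav li {
    float: left;             /* or display: inline-block */
    list-style: none;
    position: relative;      /* anchor the drop down to its <li> */
}

/* Hide the Mega Drop Down until its parent <li> is hovered */
#topNav div#mdd {
    display: none;
    position: absolute;
    top: 100%;               /* open directly below the menu item */
    left: 0;
}

#topNav li:hover div#mdd {
    display: block;
}

/* Float each 'subNav' list so the drop down gets multiple columns */
#mdd ul#subNav {
    float: left;
    list-style: none;
}
```

The `li:hover` rule gives a pure-CSS fallback; Part 3 replaces it with jQuery for the hover timing and transparency effects.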

We’ve defined the structure of the menu and navigation, but up until now, it has all been static.  To make this work in SharePoint, we have to provide a method of displaying navigation dynamically.  There are several different methods of replacing the default navigation, and for this solution, I opted to create a custom navigation control.   A great overview of the various options can be found here.  Earlier I mentioned that out of the box, SharePoint uses the asp:Menu control to render menu items.  This is a standard data-bound control that references a DataSource for rendering navigation.  If you look inside default.master with SharePoint Designer, you’ll see how all this works:

 

<SharePoint:AspMenu
  ID="TopNavigationMenu"
   DataSourceID="topSiteMap"
   runat="server"
   ...
/>
<asp:SiteMapDataSource
  ShowStartingNode="False"
   SiteMapProvider="SPNavigationProvider"
   id="topSiteMap"
   runat="server"
  StartingNodeUrl="sid:1002" />

 

The SiteMapDataSource is a standard ASP.NET data source control and is widely used in ASP.NET web development.  If you are not familiar with how navigation works in ASP.NET, MSDN has a great overview.  The important concept to note here is the SiteMapProvider property and the SPNavigationProvider value.  SPNavigationProvider is a class that follows the standard provider model and essentially is the code that retrieves the navigation hierarchy from SharePoint.  SPNavigationProvider handles all the mundane tasks of retrieving the data and security trimming results so users don’t see links to sites that they may not have access to.
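For reference, providers such as SPNavigationProvider are registered in the web application’s web.config under the <siteMap> element.  The snippet below is abbreviated for illustration; the exact provider list, default provider name, and assembly version vary by installation:

```xml
<system.web>
  <siteMap enabled="true">
    <providers>
      <add name="SPNavigationProvider"
           type="Microsoft.SharePoint.Navigation.SPNavigationProvider, Microsoft.SharePoint,
                 Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" />
      <!-- additional SharePoint providers omitted -->
    </providers>
  </siteMap>
</system.web>
```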

As mentioned, we are scrapping the SharePoint:AspMenu control because it renders <TABLE> based navigation and are replacing it with a shiny, new <UL> based navigation control.  Other than being difficult to pronounce (and type), HierarchicalDataBoundControl is the underlying class that asp:Menu uses to retrieve its data.  We are simply bypassing asp:Menu and its bias for <TABLE> layouts and going directly at the data.  Our custom control’s declaration looks something like this:

public class MegaDropDown : HierarchicalDataBoundControl
{
   protected override void PerformDataBinding() { /* ... */ }
   protected override void Render(HtmlTextWriter writer) { /* ... */ }
}

 

The two methods that we need to override are PerformDataBinding() and Render().

The PerformDataBinding method is where we will retrieve our data from the data source (indirectly through SPNavigationProvider) and then (ultimately) parse the hierarchical data that is returned.  Initially, the GetData() method is called to…wait for it…get the data.  In general, these controls can represent many different flavors of data.  Since we are working with hierarchical data, GetData() happily returns a reference to a HierarchicalDataSourceView object that represents a view into the navigation.  A data source view is similar to a database view and has the ability to provide a subset of data based on a query.  Conveniently, HierarchicalDataSourceView points to a nested list of the navigation data.  For our purpose, there isn’t a need to filter results and a simple call to the Select() method will suffice.

Having a reference to the view is great, but what we really want is an enumerable reference to the actual data (remember, the view is just an abstraction of the data).  The IHierarchyData interface gives us many options (well, not really), but since everyone likes enumerable objects, getting to the data is only a two-step process: call GetHierarchyData() on the enumerable, then invoke IHierarchyData.GetChildren() to access each node’s children.

Surprisingly, it is very straightforward and only involves a dozen or so lines of code:

protected override void PerformDataBinding()
{
   base.PerformDataBinding();

   // Do not attempt to bind data if there is no data source set.
   if (!IsBoundUsingDataSourceID && (DataSource == null))
      return;

   HierarchicalDataSourceView view = GetData("");

   if (view == null)
      throw new InvalidOperationException("Cannot get any data.");

   IHierarchicalEnumerable enumerable = view.Select();

   if (enumerable != null)
   {
      // Walk the top-level navigation nodes; each node exposes
      // its sub-items through IHierarchyData.GetChildren().
      foreach (object item in enumerable)
      {
         IHierarchyData data = enumerable.GetHierarchyData(item);

         if (data.HasChildren)
         {
            IHierarchicalEnumerable children = data.GetChildren();
            // ... build the custom MDD collection from 'children' here
         }
      }
   }
}


Next up is the HTML rendering and actual control implementation…Part 2 – Custom Control Rendering
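As a rough preview, a stripped-down Render() override could emit the nested list structure along these lines.  This is a sketch only: the _topItems field and the NavItem container class are placeholder names I’m using for illustration, not necessarily the final implementation:

```csharp
protected override void Render(HtmlTextWriter writer)
{
    writer.AddAttribute(HtmlTextWriterAttribute.Id, "topNav");
    writer.RenderBeginTag(HtmlTextWriterTag.Ul);

    // _topItems: the custom collection built during PerformDataBinding
    foreach (NavItem item in _topItems)
    {
        writer.RenderBeginTag(HtmlTextWriterTag.Li);

        writer.AddAttribute(HtmlTextWriterAttribute.Href, item.Url);
        writer.RenderBeginTag(HtmlTextWriterTag.A);
        writer.Write(SPHttpUtility.HtmlEncode(item.Title));
        writer.RenderEndTag();   // </a>

        // Each item's 'mdd' <div> and 'subNav' <ul> children would be
        // rendered here using the same Render*Tag pattern.

        writer.RenderEndTag();   // </li>
    }

    writer.RenderEndTag();   // </ul>
}
```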

Schedule End Date Behavior with MOSS Publishing Pages

July 10th, 2009 by tdietz No comments »

Recently I was helping a client look into an issue with Publishing Pages that were mysteriously being moved to the Draft state.

After some investigation, it turned out to be related to the Scheduling End Date field from the Publishing Page Content Type.  Authors were using this field to prevent pages from showing up in a rollup via the Content Query Web Part.  They would set a date in the future and at that time, the article disappeared from the list.

What’s strange is that when this happened, the page switched back to Draft mode, but no entries for this change were logged in Version History.  Apparently the system does this behind the scenes.

While I guess this is desired behavior, the problem that they were facing is that these pages would no longer appear in search results because of their draft state.  The obvious solution was to create a new field that determines if the item should appear in a rollup and filter it via CQWP.

A simple SPD workflow that waits x number of days before setting this field solved the problem of forcing someone to do all of this manually.

HTTP 503 Service Unavailable error

June 19th, 2009 by tdietz No comments »

I was working on a custom HttpModule for SharePoint that was packaged in a Feature and Solution and (after hundreds of successful redeploys) I started seeing HTTP 503 errors and none of my sites would come up (even after an IISRESET and machine reboot).

After some digging, I noticed that the application pool for the SharePoint web app no longer started automatically when IIS came up.

Simply starting this manually fixed my problem and now the app pool starts automatically.

DIV issues in Webkit?

May 30th, 2009 by tdietz No comments »

I am building a UL based navigation for SharePoint and came across an interesting issue with Safari and Chrome.  My nav looks something like this:

<div id='navcontainer'>
    <ul id='navigation'>
        <li class='nav_sites0'><a href=''><span>Groups1</span></a>
            <div class='menufooter'/>
        </li>
        <li class='nav_sites1'><a href=''><span>Groups2</span></a></li>
    </ul>
</div>

For some reason nav_sites1 becomes a child of menufooter, rather than a sibling of nav_sites0.   After some fiddling, I changed:

<div class='menufooter'/>

to:

<div class='menufooter'></div>

and everything works fine.   IE7/8 and Firefox seem to be a little more forgiving with DIVs than Safari and Chrome.

A colleague told me that the <element/> notation has been deprecated, but I can’t find anything on this.  As far as I can tell, the self-closing syntax is only meaningful in XHTML served as XML; under regular HTML parsing rules, the trailing slash on a non-void element like <div> is simply ignored, so <div/> is treated as an opening tag and everything after it becomes its children, which is exactly the behavior I was seeing.

Migrated from DasBlog to WordPress

April 13th, 2009 by tdietz No comments »

After using DasBlog for several months, I decided to move my blog account over to WordPress.  DasBlog was nice, but I was frustrated by the lack of features.  I’m still hosting my own blog, but it’s all moved over to WordPress now.

Migrating my posts was somewhat of a hassle, but I did get them all migrated after reading this excellent article.  I did have to manually clean up the bodies of most of my posts, but fortunately (?!) I haven’t been posting that often.

WordPress definitely has a lot more features, not to mention a plethora of updated themes.

Several caveats:

  • You’ll lose all your comments
  • Depending on how you uploaded images and other binaries, you might have to re-upload everything manually
  • Make sure you keep your existing DasBlog instance running until you have moved everything over

Live Mesh Saves Me

March 24th, 2009 by admin No comments »

Had a bad HD crash today and lost the whole system partition (the HD is toast).  Fortunately I was syncing all my critical data files with Live Mesh.

After I re-installed the Win7 beta on my spare laptop (about 30 minutes), I installed Live Mesh and re-connected to the

SSRS Presentation Available

March 3rd, 2009 by admin No comments »

You can find slides from the Feb, 2009 West Michigan SharePoint User Group presentation on Microsoft SQL Server Reporting Services and SharePoint here.