Recently I’ve been interested in incorporating more automated testing into my projects. The benefits of having tests are heavily covered elsewhere, so I won’t waste time explaining why. I sat in on a few sessions at this year’s Code Camp NYC focusing on automated testing, and needless to say, those few sessions were all it took to push me into getting my feet wet.
I decided to start with a demo project so I didn’t dirty up any formal projects while I tried out the best approach to implementing the tests. I recently created a project to check out ASP.NET Web API as a progressive download host (which allows for streaming video to iOS devices, lets downloads be chunked and resumed by download managers, and so on). This seemed like a good candidate for tests since the behavior of the library changes based on the HTTP request headers for a resource.
Some time ago I read an interesting Stack Overflow question asking about streaming videos to iOS devices from ASP.NET MVC. In researching the answer, I learned about parsing HTTP request headers and constructing responses. In general, for iOS devices to play video, a web server needs to understand requests with the Range header and respond appropriately with a 206 (PartialContent) status and a Content-Range header that more or less repeats the originally requested range.
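To make the request/response dance concrete, here is a minimal sketch (not my actual library code, and not ASP.NET) of the arithmetic a server has to do: parse a Range header like `bytes=0-999`, clamp it against the file size, and produce the 206 headers. The function name and shape are my own for illustration.

```javascript
// Illustrative sketch: given a Range request header and the total size of the
// file, work out the byte span to serve and the headers for a 206 reply.
function buildPartialContentHeaders(rangeHeader, fileSize) {
  const match = /^bytes=(\d*)-(\d*)$/.exec(rangeHeader);
  if (!match) return null; // malformed range; fall back to a full 200 response

  let start = match[1] === "" ? NaN : parseInt(match[1], 10);
  let end = match[2] === "" ? NaN : parseInt(match[2], 10);

  if (isNaN(start)) {
    // Suffix range, e.g. "bytes=-500" means the last 500 bytes of the file.
    start = fileSize - end;
    end = fileSize - 1;
  } else if (isNaN(end) || end >= fileSize) {
    // Open-ended range, e.g. "bytes=0-", runs to the end of the file.
    end = fileSize - 1;
  }

  return {
    status: 206, // PartialContent
    headers: {
      "Content-Range": `bytes ${start}-${end}/${fileSize}`,
      "Content-Length": String(end - start + 1),
      "Accept-Ranges": "bytes",
    },
  };
}
```

The suffix and open-ended forms are the ones iOS players and download managers lean on: the device probes with a small range first, then requests the rest as the user scrubs through the video.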
I originally wrote an answer to this effect but also wanted to test it out to make sure it worked in ASP.NET MVC. While successful, it was not particularly elegant. After getting some feedback and requests, I wrote a second version. Surprisingly, I still get pings on this from time to time, and it made me wonder if something more integrated has shown up in ASP.NET MVC since my original attempts. I hadn’t had a need for this myself until a couple of days ago, when I got the urge to stream video content from my Windows laptop to my TV and thought it might be a good time to revisit the library. What I wanted was an iOS-compliant video server so I could stream videos to my Apple TV via my phone.
I knew I had a working library for ASP.NET MVC that I could have up and running in a few minutes, but I was more curious whether there had been any advancements in supporting partial range requests more seamlessly. A few brief searches turned up the ByteRangeStreamContent class and the Range header exposed on HttpRequestMessage in ASP.NET Web API, which looked promising.
Over the last few months I’ve been investigating source control systems to replace our aging solution, and after going through a few options we finally settled on TFS 2013. After much research I was able to figure out all the necessary requirements and the best options for getting it set up and running, except for one crucial question: I couldn’t find a straight answer explaining how the application talks to a remote database server with regard to user accounts. My guess was that the TFSService account was in charge of this, but I couldn’t get a definitive answer. I was really hoping it wasn’t going to use integrated authentication all the way to the database, since that would be a non-starter.
With fingers crossed, I managed to get all the necessary approvals, got a new VM stood up and even got the database team to agree to create a service account that would let TFS manage its database requirements directly (rather than trying to pre-create the tables).
When the day finally came, we ran into a slight snag. I was attempting to do the TFS configuration from my account while the database guy watched, ever ready to type in the service credentials. Unfortunately the database wasn’t being found, no matter how correctly those credentials were typed. On a complete guess, we decided to try again while the DB guy was logged in. Sure enough, this worked.
Once the configuration was complete, I was able to hit the TFS front-end and the connections to the database from that point on happened through the TFSService account. Neat. The downside is I can’t run the configuration tool to add new collections or change features since my account doesn’t have database rights. The TFSService account only applies to calls through the front-ends.
So, if you decide to install TFS 2013 on premises and to use a remote database managed by another team with very strict permissions, you’re going to need to run the configuration under an account with permissions to the database server. The web front-ends will communicate using the account configured as the TFSService account after that.
There. Weeks of trying to find a definitive answer and I finally just had to try it and hope for the best. Hopefully this saves someone else some time!
Happy controlling of the source,
It’s taking a bit longer than I’d initially expected to get out Part 2 of creating a user picker in HTML, CSS and jQuery. In the interim the user picker has evolved into a more generic picker useful for other types of data. As such it’s undergone some heavy modification to strip out specific references to users / employees and has gained some additional configuration options.
I had almost the entire Part 2 post written and ready the night I published Part 1; I only needed to do a bit of cleanup on the user picker to remove any traces of specific information and get it ready for public viewing. With all the changes that have happened since, that post is basically obsolete. It wouldn’t be kind to make you wait for the rewrite before seeing the current code and starting to use it. I’ll try to get the blog post out soon for those interested in the process of creating a jQuery plugin, but I have a feeling the source is much more highly desired!
The picker – now called entitypicker – can be found on GitHub. There is also a hastily thrown-together demo page. The demo uses Yahoo services to do YQL queries for city names.
UPDATE: I created a project on GitHub. See the follow-up post.
I was recently tasked with solving a rather unusual problem at my company. It was unusual not because the request was unique, but because something so seemingly common had no existing solution here. While we have created standalone web applications to satisfy internal needs, it appears we never really had cause to build a proper user picker. Most of our solutions that needed one happened to be surfaced through SharePoint and made use of its user search and selection methods. That’s why I was surprised to have three applications needing a picker for internal employees pop up at almost exactly the same time.
Of course our first thought was to leverage an autocomplete control. Quite some time ago I’d made use of the jQuery plugin from bassistance.de before one was available as part of jQuery UI. Though the autocomplete was quite nice, the free-form nature of the text was not strict enough for our purposes in picking users.
A few suggestions were tossed around, including turning an HTML div into a pseudo-input by listening to keypress events and updating the values. A coworker was tasked with turning out a prototype based on this design, and he did a pretty good job. It was functional, but it lacked some features people have come to expect from an autocomplete-style control, like the ability to use arrow keys to navigate the selections. In addition, the input was a bit buggy, and there were race conditions when rendering suggestions from a remote service while a user was still typing. To fix the buggy input, the keypress listeners were replaced by a hidden input control nested inside the div and styled to be invisible. We still had to address the usability issues and the race condition.
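The race-condition fix boils down to two ideas: debounce the keystrokes so we aren’t querying on every character, and stamp each outgoing request so a slow response for an older query can’t overwrite the suggestions for a newer one. A sketch of that pattern (the names here are illustrative, not the plugin’s actual API):

```javascript
// fetchSuggestions(query) returns a Promise of results; render(results)
// draws the suggestion list. Both are supplied by the caller.
function createSuggestionFetcher(fetchSuggestions, render, delayMs) {
  let timer = null;
  let latestRequestId = 0;

  return function onInput(query) {
    // Debounce: restart the wait on every keystroke.
    clearTimeout(timer);
    timer = setTimeout(() => {
      const requestId = ++latestRequestId;
      fetchSuggestions(query).then((results) => {
        // Ignore this response if a newer request has started since.
        if (requestId === latestRequestId) render(results);
      });
    }, delayMs);
  };
}
```

Wired to the hidden input’s keyup handler, this guarantees that whatever the network does, only the suggestions for the most recent query ever reach the screen.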
At this point we had a good idea of what we wanted the control to look like – modeled after GMail’s address picking – and what it needed to do. I ended up gutting the prototype and rewriting it. While it is still an early version and could use some enhancement to make it more flexible, we achieved a solution that is working quite well for us.
A coworker was tasked with finding a good client-side grid for displaying and editing data from WCF services. We’ve had a little experience with WCF services and JSON serialization of simple data to generate some custom HTML but until now we hadn’t touched grids. Typically we’d use a 3rd party ASP.NET control inside a web application but lately we’ve been trying to be a little more lean.
I haven’t done any research into grids myself, so I’m taking it on faith that jqGrid is right for us. Right out of the gate there was some trouble getting our JSON data into a format the grid appreciated. Since we didn’t fancy the idea of boxing all our services into returning data in the grid’s format, we had to get the grid to work with ours.
I went from knowing nothing about this grid to having 20 tabs open on various sites, trying to find any information I could about making the grid work with our data. It is shocking how much information is out there, yet I didn’t find a single source that put it all together. It was through pure persistence that I found a reference to jsonReader and then found the properties it expects.
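For anyone on the same hunt, a sketch of the kind of jsonReader we ended up with. The property names (`repeatitems`, `root`, `id`, `page`, `total`, `records`) are jqGrid’s; the response shape and the `EmployeeId` field are illustrative of a WCF payload (WCF wraps JSON results in a `d` property), not a jqGrid requirement. As I recall, jqGrid accepts functions for these properties, which is what makes a custom format workable:

```javascript
// Teach jqGrid where to find things in our WCF-shaped response,
// which is { d: [ {...row}, {...row} ] } rather than the default
// { rows: [...], page: n, total: n, records: n } envelope.
var jsonReader = {
  repeatitems: false,           // read cells by property name, not position
  root: function (response) {   // where the array of row objects lives
    return response.d;
  },
  id: "EmployeeId",             // property holding each row's unique id (illustrative)
  // Our services return no paging info, so report everything as one page.
  page: function () { return 1; },
  total: function () { return 1; },
  records: function (response) { return response.d.length; }
};
```

This object is then passed straight into the grid setup, along the lines of `$("#grid").jqGrid({ url: serviceUrl, datatype: "json", jsonReader: jsonReader, ... })`.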
During my time at the World Science Festival this year I had the opportunity to hear a couple of presentations on different topics. I enjoyed most of them, but one presentation in particular rubbed me the wrong way. Normally I would just gloss over the rough parts and pluck out the gems, but in this case I think the rough parts deserve some attention.
The specific presentation was titled “Mentoring Makers: What’s Wrong with DIY.” In it, Professor Neil Gershenfeld made a few provocative statements. In essence, he feels DIY is, at its core, wrong: the DIY culture leads to the reinvention of past mistakes and doesn’t do enough to mentor young people. As evidence he cites the design of a popular 3D printer whose body is composed of wood and screws. This design, he says, is faulty: over time the wood dries out and the screws or bolts loosen. It’s a flaw that large manufacturing firms solved long ago.
His disdain for 3D printers in general was quite evident and extended far beyond the materials used in their construction. His entire presentation was punctuated by the repeated refrain “I hate 3D printers!” He makes a good point that 3D printers aren’t the be-all and end-all of home custom fabrication. CNC machines, laser cutters and even basic lathes are all very powerful tools that are more efficient for some designs. A 3D printer, he argues, should only be used for designs too complex to be made any other way.
While his point is valid on efficiency grounds, a 3D printer allows a wider array of shapes and objects to be produced at a lower cost, and in a smaller space, than an entire workshop of machines. To me the usefulness of the machine is self-evident and shouldn’t be discounted just because it isn’t the best tool for every job.