Kristian Kristensen’s Blog


Running FreeSwitch on Windows Azure Virtual Machines

Posted in Azure, FreeSwitch, Microsoft, Misc, VoIP by Kristian Kristensen on June 22nd, 2012

After the announcement that Windows Azure would support hosting real Virtual Machine images, I've wanted to test it out and see if I could run FreeSwitch on it. FreeSwitch is an open source soft switch for telephony, kind of like Asterisk.
The short conclusion is that yes, you can run it, but it's severely limited since Azure doesn't allow more than 25 port pairs to be forwarded per VM. Read ahead for what it took to get it up and running, plus some screenshots.

After you get access to the preview Azure portal, setting up the VM is pretty easy. There's a gallery of OS images to choose from (4 in total), of which I chose Ubuntu 12.04 LTS. My VM was set up in "East US" and I chose an Extra-Small instance, since it was just for testing. During the setup you can choose to add a user account with a password or upload your SSH key. After the box boots up, you can log into it using the Virtual IP (VIP) that Azure assigns, or the DNS alias created during setup. So far so good. You ssh in, and it's a virtual machine like any other.
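
For example, with the DNS alias chosen during setup (the hostname and user below are just placeholders for whatever you picked):

ssh azureuser@freeswitch-test.cloudapp.net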

The new Azure portal is pretty nifty and much better compared to its previous Silverlight incarnation. It's all HTML, and all the data is retrieved via a REST API that you can access yourself as well. That means that nothing is hidden, and you could build your own Azure dashboard if you wanted to. The default one shows the state of the machine along with some pretty usage graphs.

Azure VM FreeSwitch Dashboard

Getting FreeSwitch to run on this new virtual machine requires the normal source checkout dance. The following commands will do fine.
Prerequisites for Ubuntu 12.04 LTS (from the Wiki):

sudo apt-get install git-core build-essential autoconf automake libtool libncurses5 libncurses5-dev gawk libjpeg-dev zlib1g-dev pkg-config libssl-dev
sudo update-alternatives --set awk /usr/bin/gawk

Then:

cd /usr/local/src
git clone git://git.freeswitch.org/freeswitch.git

cd /usr/local/src/freeswitch
./bootstrap.sh
./configure
make && make install
make all cd-sounds-install cd-moh-install

After that you can start FreeSwitch by going to the bin directory at /usr/local/freeswitch/bin and running "./freeswitch". This runs the out-of-the-box configuration, which has users and a dialplan set up. It's good enough for testing.
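
If you'd rather not keep a console attached, you can also run FreeSwitch in the background and poke at it with fs_cli (a minimal sketch, using the stock install location):

cd /usr/local/freeswitch/bin
./freeswitch -nc   # -nc starts FreeSwitch as a background process without the console
./fs_cli           # attaches a CLI to the running switch; type /exit to detach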

FreeSwitch will try to auto-detect the NAT settings and its own IP. It will fail because the VM is locked down at the firewall level, so it'll only find its own internal IP in the 10.*.*.* range. This of course won't help us when we're trying to register a user agent to it. The next step is therefore to set up some endpoint mappings.

We want UDP traffic on port 5060 to be forwarded to our VM, as well as some RTP ports. Looking through the documentation for the command line tools and looking at the portal, I had first thought that you couldn't automatically forward port ranges. That sucks when you want to forward the entire RTP UDP space. I thought (naively?) that a simple hack would be to loop through all the ports and forward them one by one. Alas, that doesn't work. There is a limit to the number of endpoint mappings one can create for an Azure VM: 25 port pairs. That throws a big wrench in the plan of hosting a soft switch on Azure VMs. However, we can still get the proof of concept going by forwarding a couple of RTP ports and then limiting FreeSwitch to only use these.
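
For the record, the naive port-by-port loop would look something like the sketch below. It uses the cross-platform azure CLI from the preview; the exact "azure vm endpoint create" syntax, whether it can create UDP endpoints, and the VM name are all assumptions here, and the loop hits the 25 endpoint limit long before the RTP range is covered anyway.

# forward SIP, then a small slice of the RTP range, one endpoint at a time
azure vm endpoint create freeswitch-vm 5060 5060
for port in $(seq 16384 16394); do
  azure vm endpoint create freeswitch-vm $port $port
done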

Azure VM FreeSwitch Endpoint Mappings

We also need to tell FreeSwitch to only use this mapped port range. So open up conf/autoload_configs/switch.conf.xml and set rtp-start-port and rtp-end-port to the start and end of the mapped port range.
Then we'll update external_rtp_ip and external_sip_ip to be the Virtual IP assigned by Azure. Open vars.xml and make the change. Lastly, you want to update the internal SIP profile to use the external SIP and RTP IPs instead of what FreeSwitch tries to guess using auto-nat, which is the default. So open up conf/sip_profiles/internal.xml and replace:

<param name="ext-rtp-ip" value="auto-nat"/>
<param name="ext-sip-ip" value="auto-nat"/>

with:

<param name="ext-rtp-ip" value="$${external_rtp_ip}"/>
<param name="ext-sip-ip" value="$${external_sip_ip}"/>
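
Taken together with the internal.xml change above, the edits amount to something like this sketch from the shell (assuming the stock config layout under /usr/local/freeswitch/conf, an example mapped RTP range of 16384-16394, and 203.0.113.10 standing in for your Virtual IP):

cd /usr/local/freeswitch/conf
# pin RTP to the handful of ports mapped as Azure endpoints
sed -i 's/name="rtp-start-port" value="[0-9]*"/name="rtp-start-port" value="16384"/' autoload_configs/switch.conf.xml
sed -i 's/name="rtp-end-port" value="[0-9]*"/name="rtp-end-port" value="16394"/' autoload_configs/switch.conf.xml
# point the external SIP/RTP IPs at the Azure Virtual IP instead of the default STUN lookup
sed -i 's/external_rtp_ip=[^"]*/external_rtp_ip=203.0.113.10/' vars.xml
sed -i 's/external_sip_ip=[^"]*/external_sip_ip=203.0.113.10/' vars.xml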

Then recycle FreeSwitch. You should now be able to register to your Virtual IP or the DNS alias assigned by Azure using one of the test accounts in the default config. Once your softphone registers, try calling another extension on the box or call 9664 for some lovely hold music.
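
A quick way to check that the internal profile picked up the external IP, and that your softphone actually registered, is through fs_cli:

./fs_cli -x "sofia status profile internal"       # Ext-SIP-IP / Ext-RTP-IP should show the Azure VIP
./fs_cli -x "sofia status profile internal reg"   # lists the user agents currently registered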

I found most of the setup I needed on the FreeSwitch EC2 wiki page.

This is the first preview release of Virtual Machines running on Windows Azure. I hope that being able to map port ranges, and definitely having more than 25 endpoint mappings, is something that'll come to Azure soon. From the responses on the forum it seems like it's on the roadmap, although without any indication of a time frame. Fingers crossed for sooner rather than later.


    Issues with MySQL and ODBC on CentOS 5

    Posted in Code by Kristian Kristensen on April 5th, 2012

    I run a server with CentOS 5.6. For a number of reasons I need a newer version of MySQL running on it than what CentOS 5 ships with, so I used IUS to install MySQL 5.1. Great. It works, and I have a somewhat recent version of MySQL. The problem was that I'm running FreeSwitch (FS) on this box as well, and I've linked up FS to MySQL via ODBC. For some reason that connection went wonky at some point, during an upgrade I suspect. So the problem was that FS was in a limbo mode where it couldn't start because MySQL ODBC didn't work.

    The error I got was something along the lines of:

    isql: relocation error: /usr/lib64/libmyodbc3.so: symbol strmov, version libmysqlclient_15 not defined in file libmysqlclient.so.15 with link time reference

    Starting FreeSwitch gave me the same error.

    You can use isql to test that the connection is indeed there, so I had a good baseline test for figuring out when this would actually work again. Googling the error was of no help. Upgrading packages via yum back and forth was no help either. The solution for me ended up being very simple: install a new version of the MySQL ODBC Connector and update my ODBC DSN to use the new driver.

    1. Get the newer ODBC package for your platform here.
    2. Install it using “rpm -i mysql-connector-odbc-5.1.10-1.rhel5.x86_64.rpm” – that was the version I used.
    3. The RPM will automatically add the new driver to /etc/odbcinst.ini, so you need to update your /etc/odbc.ini to use this new driver instead.

    /etc/odbcinst.ini

    [MySQL]
    Description = ODBC for MySQL
    Driver = /usr/lib64/libmyodbc3.so
    Setup = /usr/lib/libodbcmyS.so
    FileUsage = 1
    UsageCount = 2

    [MySQL ODBC 5.1 Driver]
    Driver = /usr/lib64/libmyodbc5.so
    UsageCount = 1

    /etc/odbc.ini

    [MyDSN]
    Driver = MySQL ODBC 5.1 Driver

    My old DSN used "MySQL" as the driver, but with the new package installed I had to change that to "MySQL ODBC 5.1 Driver". That name is of course configurable.
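
    With the DSN pointing at the new driver you can test straight from the shell with isql from unixODBC (the user name and password below are placeholders):

    isql -v MyDSN mysql_user mysql_password

    A working setup drops you at a "SQL>" prompt instead of the relocation error.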

    After this, running isql gave me a connection to the database and everything was back to normal. Posting it here in case someone has the same issue and can't find a solution.


    Sencha Touch Cookbook Published

    Posted in Book, Code, Misc, Sencha Touch by Kristian Kristensen on February 18th, 2012

    During the fall of 2011 I was a technical reviewer of the now published book on Sencha Touch by Packt Publishing. The book goes through a number of different scenarios for building apps with Sencha Touch.

    Sencha Touch Cookbook

    Reviewing a book has been an interesting experience. The process is fairly simple. You receive a number of chapters on a fixed schedule. You then have to comment on them, suggest improvements, and fact check what's written, as well as make sure the code is reasonable. This then has to be sent back to the editor.
    The bonus of being a reviewer is that you get your name in the book. So if you go to the Amazon Look Inside feature and flip through the first few pages, you'll find a little blurb about me.

    You can find and buy the book online at Packt Publishing or on Amazon.


    Running Erlang Webmachine on Heroku

    Posted in Code, Erlang, Ruby by Kristian Kristensen on December 16th, 2011

    When Heroku released their new Cedar stack, they opened the door for a whole new set of components to run on their platform. To support running other components on Heroku you need a build pack that tells Heroku how to build and run your app. Heroku has published a build pack for Erlang as well as a demo app. This demo app uses straight up Mochiweb as a web server.

    MochiWeb is an Erlang library for building lightweight HTTP servers.

    Mochiweb Github Repo

    Webmachine runs on top of Mochiweb and is

    A REST-based system for building web applications.

    Webmachine Github Repo

    In this blog post I’ll show how to get a Webmachine app running on Heroku.

    Let’s get to it

    So here are the steps to get a vanilla Webmachine app up and running on Heroku.

    $ git clone git://github.com/basho/webmachine
    $ cd webmachine
    $ make
    $ ./scripts/new_webmachine.sh my-erlang-app /tmp
    $ cd /tmp/my-erlang-app
    $ make
    

    You can now run "./start.sh", open your browser at http://localhost:8000 and see your new awesome Hello World Webmachine app. Now we want to deploy it to Heroku.
    Make sure you have Ruby and the Heroku gem installed. If not, run:

    $ gem install heroku
    

    Then setup our app:

    $ git init
    $ heroku create my-erlang-app -s cedar
    $ heroku config:add BUILDPACK_URL=http://github.com/heroku/heroku-buildpack-erlang.git
    

    This initializes a new Git repository in our app directory and creates the Heroku app. The final line sets up the build pack that Cedar should use when deploying the app.

    Let’s add the source files to the Git repo and start hammering out some code.

    $ git add Makefile README rebar rebar.config start.sh src/* priv/dispatch.conf
    

    Next we want to update our rebar.config:

    %%-*- mode: erlang -*-
    {sub_dirs, ["rel"]}.
    {deps_dir, ["deps"]}.
    {erl_opts, [debug_info]}.
    
    {deps, [{webmachine, "1.9.*", {git, "git://github.com/basho/webmachine", "HEAD"}}]}.
    

    Create the Procfile which Foreman will use to control our app:

    web: erl -pa ebin deps/*/ebin -noshell -boot start_sasl -s reloader -s my-erlang-app
    

    You now have the major components in place for Heroku deployment. If you want to test it out, run:

    $ ./rebar get-deps compile
    $ foreman start
    

    Of course this requires that you have Foreman in your path. If not, install Ruby and run "gem install foreman". If all goes well your app will start up and you'll be able to point your browser to http://localhost:8000 and see the output.

    Before we can push to Heroku we need to update the application startup code for the generated Webmachine app. When Cedar attempts to start your application it'll define the port on which your app should listen/bind. Hence we need to read out this value and tell Webmachine to use it. We also want to update the logging, basically turning it off (more about this later), and we want to bind to the default catch-all IP of 0.0.0.0.

    Open up my_erlang_app_sup.erl in the src/ directory.
    Change the init function so it looks like this:

    init([]) ->
         {ok, Dispatch} = file:consult(filename:join(
                              [filename:dirname(code:which(?MODULE)),
                              "..", "priv", "dispatch.conf"])),
    
        Port = list_to_integer(os:getenv("PORT")),
        io:format("start web server on port ~p~n", [Port]),
        WebConfig = [
                     {ip, "0.0.0.0"},
                     {port, Port},
    %                 {log_dir, "priv/log"},
                     {dispatch, Dispatch}],
        Web = {webmachine_mochiweb,
               {webmachine_mochiweb, start, [WebConfig]},
               permanent, 5000, worker, dynamic},
        Processes = [Web],
        {ok, { {one_for_one, 10, 10}, Processes} }.
    

    This reads out the port number that Cedar has assigned to our app from the PORT environment variable and passes it to the Webmachine config. It also binds to the catch-all IP 0.0.0.0 and comments out the log_dir setting.

    Make sure it compiles by running the rebar command again:

    $ ./rebar get-deps compile
    

    Before we push to Heroku you might want to change the output message of your app. Open “src/my_erlang_app_ressource.erl” and change the “to_html” function.

    Now do the add and commit dance to git:

    git add <input-changed-file-list-here>
    git commit -m "initial commit before push to Heroku"
    

    Next run

    $ git push heroku master
    

    Point your browser to http://my-erlang-app.heroku.com and you should see your output message as defined in “src/my_erlang_app_ressource.erl”.

    To see what's going on when Cedar boots up your app, run "heroku logs". This will spit out the same output you see when you run foreman locally. This is a great (and the only) way to debug why your app isn't starting up.
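
    If you want to keep an eye on the logs while you redeploy, they can also be streamed (assuming a reasonably recent heroku gem; substitute your own app name):

    $ heroku logs --tail --app my-erlang-app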

    I've pushed my repository to Github and deployed the app to Heroku. It's called Erloku and you can check it out here.

    Next steps

    Here are a couple of things that I’d like to continue to work on.

    • By default Webmachine comes with a logger that outputs to files. This doesn't play well with Heroku, which expects output on Standard Output and Standard Error and redirects it to its logs. It would therefore be nice to implement a logger for Webmachine that outputs to StdOut and StdErr. Doing this shouldn't be too hard.
    • Get two dynos running an Erlang node to talk with each other. Cedar probably has walls in place that won't allow this, but if possible it would be mighty cool.

    ErlChat – A Simple Chat Server Written In Chicago Boss, A Web Framework For Erlang

    Posted in Code, Erlang by Kristian Kristensen on December 13th, 2011

    Recently I've been exploring Erlang and its web frameworks. As part of this exploration I found Chicago Boss. This blog post is a pointer to the simple chat server I built using Erlang and Chicago Boss.

    Jordan Orelli and Seth Murphy from Hacker School built Chatify as a demo application for Brubeck:

    Brubeck is a flexible Python web framework that aims to make the process of building scalable web services easy.

    Chatify is a simple chat application built with Brubeck as the backend and HTML and Javascript as the front end.
    I decided to reimplement the backend in Chicago Boss and reuse the front-end. The result is Erlchat.

    Here’s the main page of Erlchat, where you enter your nickname before logging into the chat:

    ErlChat Main Page

    Bert and Ernie chat away:

    ErlChat Chat Screen 1

    They’re loving it!

    ErlChat Chat Screen 2

    The code is really simple, mostly because the required parts for building a chat server are built into Chicago Boss in the form of a message queue abstraction. The reason it works, however, is Erlang's ability to scale out. Each call to retrieve messages is a long polling HTTP call, and hence blocks a connection. Since Erlang scales to many thousands of processes and Chicago Boss takes advantage of that, it really isn't a problem.

    The following function gets called by the client when he wishes to retrieve the messages that have occurred in Channel since his last retrieval (LastTimestamp). It blocks on the call to boss_mq:pull.

    receive_chat('GET', [Channel, LastTimestamp]) ->
        {ok, Timestamp, Messages} = boss_mq:pull(Channel, list_to_integer(LastTimestamp)),
        {json, [{timestamp, Timestamp}, {messages, Messages}]}.
    

    Sending a message is a simple HTTP POST that creates a new message and pushes it onto the Channel message queue. It uses the helper function seen below.

    send_message('POST', [Channel]) ->
        create_and_push_message(Channel, list_to_binary(Req:post_param("message")), Req:post_param("nickname")),
        {output, "ok"}.
    
    create_and_push_message(Channel, Message, Username) ->
        NewMessage = message:new(id, Message, Username, erlang:localtime()),
        boss_mq:push(Channel, NewMessage).
    

    Check out the README as well as the source. It’s all up on Github.


    Giving EM-Smsified Some Server Love

    Posted in Code, Ruby by Kristian Kristensen on November 23rd, 2011

    I just pushed the next release of my EM-SMSified gem to Github and Rubygems. This release (0.3.0) adds an EventMachine HTTP server to make it easy to react to SMSified callbacks.

    Installing is easy as always:

    gem install em-smsified

    Using it is equally easy. Here’s an example of a “pong” server that sends a pong back to any received text message:

    require 'rubygems'
    require 'yaml'
    require 'em-smsified'
    require 'eventmachine'
    require 'evma_httpserver'
    
    smsified = EventMachine::Smsified::OneAPI.new('username', 'password')
    
    EM.run do
      Signal.trap("INT") { EM.stop }
      Signal.trap("TRAP") { EM.stop }
    
      puts "Hit CTRL-C to stop"
      puts "=================="
      puts "Server started at " + Time.now.to_s
    
      puts "Starting incoming SMSified callback server"
    
      EM.start_server '0.0.0.0', 8080, EventMachine::Smsified::Server do |s|
        s.on_incoming_message do |msg|
          puts "Message received " + Time.now.to_s
          puts "#{msg.sender_address} says '#{msg.message}' to #{msg.destination_address}"
          smsified.send_sms( :message        => 'Pong',
                             :address        => msg.sender_address,
                             :sender_address => msg.destination_address) do |result|
            puts "Pong sent " + Time.now.to_s
          end
        end
      end
    end
    

    A more elaborate example is up on Github (examples/pong_server.rb).

    The server supports incoming messages, for which you need to set up a subscription (SMSified – Receiving Messages), as well as delivery notifications. The latter are set up when you send an SMS by adding the :notify_url parameter.

    Use cases

    Having a server that’s easy to use from EventMachine makes it easy to implement more advanced text message scenarios:

    • You could couple this with the em-websocket gem and add easy websocket callbacks from received text messages.

    Source on Github as usual, and gem on Rubygems.


    Putting EventMachine In the SMSified Gem

    Posted in Code, Ruby, Tropo by Kristian Kristensen on November 17th, 2011

    I wanted to utilize EventMachine for something real, and since I've been tinkering with telephony stuff recently I thought something that sends text messages might be a good candidate. Instead of rewriting everything from scratch, I started with the SMSified Ruby gem. SMSified is a service by Tropo that makes it really easy to send text messages. Since the service is still in beta, sending text messages is free. Pretty neat. SMSified's Ruby gem comes with a test suite, so I thought it would be a good starting point.

    Installing the new gem is easy:

    gem install em-smsified
    

    Here’s some example code showing how to send an SMS via SMSified:

    require 'rubygems'
    require 'eventmachine'
    require 'em-smsified'
    
    oneapi = EventMachine::Smsified::OneAPI.new(:username => 'user', :password => 'password')
    
    EM.run do
      oneapi.send_sms(:address => '14155551212', :message => 'Hi there!', :sender_address => '13035551212') do |result|
        puts result.inspect
      end
    end
    

    The original gem uses HTTParty to do HTTP requests. To mock these in the spec suite the gem used FakeWeb. FakeWeb doesn’t work with EventMachine and therefore my first step was to replace FakeWeb with WebMock, which works with a number of Ruby HTTP frameworks. After that I DRY’ed up the code a bit to contain where HTTP requests were being made. Then I added EventMachine via the EM-HTTP-Request gem. To EM’ify the new library I had to modify the original interface to take an anonymous block. This block gets called when the request to SMSified returns. This is where you can check what was returned and perform any updates. This is shown in the code sample above.

    There are more examples in the source tree and there’s also some YARD documentation.

    Jason Goecke and John Dyer wrote the original SMSified gem, making my job so much easier.

    em-smsified on RubyGems.
    Source on Github.


    TweetHose Released as a Gem

    Posted in Code, Ruby by Kristian Kristensen on October 22nd, 2011

    I’ve just released a gem called TweetHose to RubyGems. Here’s the short description of what it is:

    TweetHose lets you easily generate a daemon that listens to the Twitter firehose. When keywords you're interested in appear, you can set up a callback. Should make it easy to create that Justin Bieber tracking app you've always wanted.

    Install it via a quick:

    gem install tweethose

    Source is up on Github.


    The Simple Way to Generate One File MSDN Style Documentation for Your .NET Projects

    Posted in Code, Microsoft by Kristian Kristensen on June 10th, 2011

    I was recently tasked with generating code documentation. The last time I had to do that I used NDoc and the XML comments that the C# compiler (csc.exe) spits out. Turns out that NDoc is no longer around, or at least it's not being maintained. Instead, the new kid on the block is Sandcastle.
    It might be my memory failing, but I seem to remember that setting up NDoc to generate MSDN style documentation was pretty easy. My initial foray into using Sandcastle was not. This post describes a super simple way to generate MSDN style documentation that's contained in a single CHM file.

    We'll need some software. So go ahead and download and install the following:

    • Sandcastle
    • Sandcastle Help File Builder (SHFB)
    • HTML Help Workshop (needed to compile the .chm output)

    Optional:

    • GhostDoc – http://submain.com/products/ghostdoc.aspx
      Not required, but it makes it super easy to generate XML documentation from within Visual Studio. It’ll even auto generate some of the text. The free version is fine, but obviously the paid version includes more features.

    With all of the software installed and in place, it’s time to generate some documentation.

    • Open Sandcastle Help File Builder GUI and start a new project.
    • Right click Documentation Sources on the right and select “Add Documentation Source”.
    • Find and select your solution file (*.sln).
    • Select "HtmlHelp1" under HelpFileFormat in the "Build" category.
    • Fill out whatever other properties you want. HelpTitle and HtmlHelpName are obviously good places to put the name of your project. The "Show Missing Tags" category is good for fine tuning what gets flagged if you haven't explicitly documented everything.
    • Hit CTRL+SHIFT+B or select "Build Project" from the "Documentation" menu.
    • Wait.
    • You should now have a .chm file in your output folder.

    Remember, if you distribute the CHM file by putting it on the internet and allowing people to download it, they need to unblock the file before opening it. Otherwise all of the links inside will fail. See my post on X.509 Certificates and Downloads Via Internet Explorer for a description of the trouble that can occur when downloaded files are marked as unsafe.


    JSON Proxy Using IIS Reverse Proxy for Fun and Profit

    Posted in Code, PhoneGap, Sencha Touch by Kristian Kristensen on June 7th, 2011

    Developing mobile applications with Sencha Touch that will be packaged up with PhoneGap to run as a "native" app on a phone can be a bit of a pain, especially if you're accessing some kind of API. Because of restrictions in the browser, you're not allowed to do cross site requests, meaning it's difficult to call external APIs during development. You're likely to run your new shiny mobile app from a file on your hard drive or served up by a locally running web server. But the APIs you need to access are probably running on another, external server. Hitting that API will give you a sad message from Chrome, because of the "Same Origin Policy". Safari will just return nothing, and ignore you. IE9 and Firefox are not even options, because they don't run WebKit. You can of course do all of your development in PhoneGap, deploying to an emulator or a real device and always testing on that. But it adds quite a bit of time to the feedback loop, and for prototyping it's especially nice to have the fast feedback afforded by the browser, the console and the inspector. So how do we get around this annoyance? One way is to use JSON-P and the ScriptTagProxy. Another is to utilize a reverse proxy server, which is what I'll describe in this post.

    I'm developing on Windows 7, so this is a description of how to set it up using IIS. This guide on the IIS.net website explains things in more detail. If you're running on Mac or Linux you could set up nginx, Apache or another web server. Most web server software includes URL rewriting and reverse proxy capabilities.

    Before we get started here’s a quick and simple diagram of what we’re trying to achieve:

    Browser <=> Local Web Server (IIS) -- proxies to --> External Server with API
    

    First you’ll want to install the Application Request Routing module. This module works with the URL Rewrite module to proxy requests back and forth between your web server and some other back end server. Next step is to enable the ARR module on your IIS Root.

    Then, on the web site that hosts your mobile application code (basically just a virtual directory pointing at the web root of your Sencha Touch application), you want to configure the URL Rewrite module. There are a couple of options for doing this: one is through the IIS Management GUI, the other is adding the appropriate configuration to your web.config file. I'll list the configuration settings below, but you can just as well set this up using the GUI and the guide. Another option is to copy in the following configuration settings and then tinker with them using the GUI afterwards. The IIS Manager GUI is basically a nice way of manipulating web.config files.

    If your virtual directory doesn’t have a web.config (it probably doesn’t since it’s serving up plain HTML and JS) create a new file called web.config and fill in the following:

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
    </configuration>
    

    Then add the following in between the configuration-tags, so your entire file looks like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
    <system.webServer>
    <rewrite>
    <rules>
    <rule name="Proxy To External server" stopProcessing="true">
        <match url="^services/(.*)" />
        <conditions>
        </conditions>
        <action type="Rewrite" url="http://my-external-server.com/{R:0}" logRewrittenUrl="false" />
    </rule>
    </rules>
       <outboundRules>
       <rule name="Add Application prefix" preCondition="IsHTML">
            <match filterByTags="A" pattern="^/(.*)" />
            <conditions>
                 <add input="{URL}" pattern="^/(appName)/.*" />
            </conditions>
            <action type="Rewrite" value="/{C:1}/{R:1}" />
       </rule>
       <preConditions>
       <preCondition name="IsHTML">
            <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
       </preCondition>
       </preConditions>
     </outboundRules>
    </rewrite>
    <urlCompression doStaticCompression="false" />
    </system.webServer>
    </configuration>
    

    This configuration says to rewrite all requests coming into the virtual directory at http://localhost/appName/services/ to http://my-external-server.com/services/. The outbound rule looks at the returned HTML and rewrites that too, so links and hrefs will continue to work. If you don’t do this step, links won’t be rewritten and when you follow them you’ll get an error because the links point to the server you’re trying to mask.
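
    A quick sanity check from the command line makes it easy to see whether the rewrite kicks in (curl is assumed to be installed, and "some/endpoint" is just a stand-in for one of your real API paths):

    curl -i http://localhost/appName/services/some/endpoint

    The response should come back from my-external-server.com, served through localhost.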

    With this in place you can configure your Sencha Touch data classes to use http://localhost/appName/services/ as the server prefix, and service calls will be proxied back and forth to your external data server. To make it easier to transition from development to production, wrap the server prefix in a configuration object.

    I've found this process to be of tremendous value when developing. It's so much easier to debug things when they run in the browser and you can use the full browser stack.

    One thing I've found is that sometimes IIS will barf and throw an internal server error related to request compression and GZip'ing. I've disabled static compression in the hope that it would help, and it does, although the error still shows up occasionally. I've found that a quick "iisreset" does the trick and puts everything back on track. This minor annoyance is worth it compared to the added benefit of running everything in the browser and skipping an emulator.

