Monthly Archives: November 2011
Getting Started with the Microsoft Private Cloud
Microsoft Guidance on getting started with a private “cloud” for your own business.
A group we love: the Mobile Technology Association of Michigan (MTAM), http://www.gomobilemichigan.org/, is now accepting members and providing great benefits to them!
By Don Burnett
What is Infer.NET and how is it useful?
Computers make decisions for us every day, whether it’s running a search engine such as Microsoft’s Bing or choosing which advertising to show us on a website based on our likely interests. Behind what we see are powerful decision-making programs that allow the computer to be “smart” about the choices being made (in other words: less wrong). They are not perfect, and “artificial intelligence” (also known as A.I.) has been around for a very long time. These systems are usually hidden away under computer science topics such as cognitive science and machine intelligence.
My first encounter with such a system was in the 1980s. It was called an “Expert System” at the time and was named Magellan, built by a local Ann Arbor-based company called Emerald Intelligence, which went on to have much industry success. Over the years the technology has improved as the internet has exploded. At the heart of these systems is something called an “inference engine”.
According to Wikipedia:
“an inference engine is a computer program that tries to derive answers from a knowledge base. It is the “brain” that expert systems use to reason about the information in the knowledge base for the ultimate purpose of formulating new conclusions. Inference engines are considered to be a special case of reasoning engines, which can use more general methods of reasoning.”
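To make that definition concrete, here is a toy forward-chaining inference engine in Python. This is a hypothetical sketch of the classic expert-system idea, not code from any product mentioned here: it repeatedly applies if-then rules to a knowledge base of facts until no new conclusions can be drawn.

```python
def infer(facts, rules):
    """Forward chaining: apply rules until no new facts are derived.
    facts: set of known facts; rules: list of (premises, conclusion)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire the rule only if all premises hold and it adds something new
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# A tiny illustrative knowledge base (made up for this example)
rules = [
    ({"has feathers"}, "is a bird"),
    ({"is a bird", "can fly"}, "can migrate"),
]
print(infer({"has feathers", "can fly"}, rules))
```

Note how the second rule can only fire after the first has derived “is a bird” — that chaining is what lets the engine formulate new conclusions from the knowledge base.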
Machine Intelligence at Microsoft Research
As we all know, there can be many applications of such an engine or framework for problem solving. The folks at Microsoft Research have been working on such an engine that you can use today for non-commercial projects. It’s called Infer.NET.
“Infer.NET is a framework for running Bayesian inference in graphical models. It can also be used for probabilistic programming.
You can use Infer.NET to solve many different kinds of machine learning problems, from standard problems like classification or clustering through to customized solutions to domain-specific problems. Infer.NET has been used in a wide variety of domains including information retrieval, bioinformatics, epidemiology, vision, and many others.”
Let’s take a look at how Infer.NET works and what it does for you.
The user creates a model definition using the modeling API and specifies a set of inference queries relating to the model. The user then passes the model definition and inference queries to the model compiler, which creates the source code needed to perform those queries on the model using the specified inference algorithm. The source code may be written to a file and used directly if you need to do so.
The source code is compiled to create a compiled algorithm. This can be executed manually, for fine-grained control over how inference is performed, or run automatically via the Infer method. Given a set of observed values (arrays of data), the inference engine executes the compiled algorithm to produce the marginal distributions requested in your query. This can be repeated for different settings of the observed values without recompiling.
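That define-once, query-many-times pattern can be sketched in plain Python. This is purely illustrative and not the actual Infer.NET API: a simple beta-Bernoulli “coin bias” model stands in for the compiled algorithm, and we query it repeatedly with different observed data without redefining the model.

```python
def make_coin_model(prior_a=1.0, prior_b=1.0):
    """'Compile' a beta-Bernoulli model into a reusable inference function."""
    def run_inference(observations):
        # Conjugate update: Beta(a, b) prior plus Bernoulli observations
        heads = sum(observations)
        tails = len(observations) - heads
        a, b = prior_a + heads, prior_b + tails
        return {"posterior": ("Beta", a, b), "mean": a / (a + b)}
    return run_inference

infer_bias = make_coin_model()      # model definition happens once...
print(infer_bias([1, 1, 0, 1]))     # ...then inference runs on one data set
print(infer_bias([0, 0, 0, 1, 0]))  # ...and again, without redefining the model
```

The function names here (`make_coin_model`, `run_inference`) are invented for the sketch; the point is only the separation between building the model and executing inference against observed values.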
From the documentation:
What can we use this for and how is it useful ?
Problems that Infer.NET can solve for you: the “Click Model” example
One of the samples provided by Microsoft Research, for instance, shows how human relevance judgments can be reconciled with document click counts. The example models calibrate human judgment data against click data, using query/document pairs for which we have both kinds of observation.
This can be used to identify data for which click data and human judgment data are inconsistent and need cleanup before a ranking model can be useful. The predicted labels or scores could also be used to supplement the human judgment training data.
A user submits a query to a search engine, and the search engine returns a list of document hyperlinks to the user, along with a title and a query-related snippet extracted from each document. The user looks at the list and, based on title and snippet, decides whether to click on a document in the list or to pass over it. These decisions are recorded in a click log. The decision of a user to click or not click on a document in the list gives an indication as to whether the document is relevant or not.
The relevance of a document to a given query can also be determined by human judgments.
Judgments are usually in the form of a set of labels with associated numeric values.
- Not Relevant
- Possibly Relevant
Building a successful search engine requires collecting many human relevance judgments to create a valid document ranking system. These tend to be much more expensive to collect, and more valuable, than the click logs themselves.
Code for Walk Through:
Output from Infer.NET
How to solve this problem?
In this example two models are built to solve the problem. These models are the same except that the second one uses shared variables. The two models should give identical results provided the inference converges.
Building the First Model
In the first model of the example provided, each click or non-click provides evidence about the relevance of the query/document pair. The more examinations performed, the more believable the evidence is.
“We could think of the set of click/non-click events as the outcome of a binomial experiment – the probability of observing m clicks given N examinations is given by the binomial distribution Bin(m|N, p), where p is a parameter that we need to infer.”
Infer.NET does not provide built-in support for binomial distributions.
“We could add binomials in ourselves, but instead we consider each click/non-click event as the outcome of an individual Bernoulli experiment, and include each click or non-click as an individual observed variable. However, this would create a large number of variables for each query/document pair, and might be impractical in a very large-scale application.”
“Instead, we adopt a practical approach where the posterior for m is calculated outside the model. This posterior can be analytically and simply calculated as a beta distribution. We then use moment-matching to project this distribution onto a Gaussian distribution (the reason for this is that we will later be introducing a Gaussian score variable corresponding to this observation). All of this can be very simply done using the Infer.NET class libraries. For simplicity, we just assume for now that the observation distributions are in a single array, though this will change later.”
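Under the assumptions in the quoted passage, the arithmetic is simple enough to sketch in plain Python (illustrative only; Infer.NET does this through its own class libraries). With a uniform prior, m clicks out of N examinations give a Beta(m+1, N−m+1) posterior over the click rate, which we moment-match to a Gaussian by copying its mean and variance.

```python
def click_posterior_as_gaussian(clicks, exams):
    """Beta posterior for the click rate (uniform prior), projected onto a
    Gaussian by matching its first two moments (mean and variance)."""
    a, b = clicks + 1.0, exams - clicks + 1.0   # Beta(a, b) posterior
    mean = a / (a + b)                           # Beta mean
    var = a * b / ((a + b) ** 2 * (a + b + 1.0)) # Beta variance
    return mean, var

# 30 clicks in 100 examinations: posterior concentrates near 0.3
mean, var = click_posterior_as_gaussian(30, 100)
print(mean, var)
```

More examinations shrink the variance, so the Gaussian observation gets sharper as evidence accumulates — exactly the “more examinations, more believable evidence” behaviour described above.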
Understanding the Second Model and How It Differs from the First
The second model takes care of the plumbing needed for sharing information between models. The SharedVariable class is a convenient wrapper class used to specify the variables that are shared between the models. Let’s now skip ahead to look at the differences between model one and model two.
“Here we implement the same model as in click model 1 but with shared variables. There are a number of reasons why one might want to use shared variables, including memory problems, parallelization, and more control over the schedule, which might be necessary if there are convergence problems. Infer.NET provides a SharedVariable class and a Model class which ensure that the correct messages get marshaled between the different models. This model is available as Model2 in the example code. It mirrors the Model1 code except for the following:
- SharedVariable objects are created in place of Variable objects for all variables that we want to infer; these are initialised with the priors.
- Model code must be changed to refer to the instance of the SharedVariable for the current chunk.
- The data is divided into identically sized chunks.
- We explicitly loop over chunks, and do inference on each chunk. We need to loop over all chunks several times, checking marginals between each pass to test for convergence.
- For each chunk, we use SharedVariable and Model class methods to obtain the variables for each sub model, and to perform inference on these variables, respectively.”
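The chunking idea can be illustrated with a conjugate model in plain Python (again just a sketch of the concept, not the SharedVariable API): the posterior from one chunk becomes the prior for the next, and for this simple model the chunked result matches processing all the data in one pass.

```python
def update_beta(prior, chunk):
    """One chunk of Bernoulli observations updates a Beta(a, b) prior."""
    a, b = prior
    heads = sum(chunk)
    return a + heads, b + len(chunk) - heads

data = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
chunks = [data[i:i + 5] for i in range(0, len(data), 5)]  # identically sized chunks

posterior = (1.0, 1.0)            # the "shared" state, starting from the prior
for chunk in chunks:              # explicit loop over chunks
    posterior = update_beta(posterior, chunk)

print(posterior)                  # same result as one pass over all the data
print(update_beta((1.0, 1.0), data))
```

In a real shared-variable setup the models are not conjugate and inference must loop over the chunks several times until the marginals converge; this sketch only shows why chunked processing can reproduce the batch answer while holding just one chunk in memory at a time.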
Using the models in Prediction
The training models used above were structured according to label class. For prediction there is usually no label information. Many of the components of the model are the same as for training.
Inferred variables from the trained model are used as the priors for the prediction model. The prediction model differs from the training model as follows:
- Data is not partitioned according to label, because there are no labels, so there is no loop over labels.
- The lower and upper bound thresholds are set to negative infinity and positive infinity, rather than 0.0 and 1.0, so the label probabilities output by the model sum to 1.0.
- An array of bool variables is set up; the marginal distributions of these, as Bernoulli distributions, give the probability of each label.
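Why infinite outer bounds make the label probabilities sum to 1.0 can be seen with a small Python sketch (illustrative only; the threshold values and label count here are made up): each label's probability is the mass of the Gaussian score falling between consecutive thresholds, and with −∞ and +∞ as the outer bounds the slices cover the whole distribution.

```python
import math

def gaussian_cdf(x, mean, sd):
    """Cumulative distribution function of a Gaussian, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

def label_probabilities(score_mean, score_sd, inner_thresholds):
    """Probability of each label = Gaussian mass between consecutive
    thresholds; outer bounds are -inf and +inf, so the slices sum to 1."""
    bounds = [-math.inf] + sorted(inner_thresholds) + [math.inf]
    cdfs = [gaussian_cdf(b, score_mean, score_sd) if math.isfinite(b)
            else (0.0 if b < 0 else 1.0) for b in bounds]
    return [hi - lo for lo, hi in zip(cdfs, cdfs[1:])]

# A score centred between the two (hypothetical) inner thresholds
probs = label_probabilities(0.4, 0.2, [0.25, 0.75])
print(probs, sum(probs))
```

Here the middle label gets the most mass because the score's mean lies between the two inner thresholds; a more confident model (smaller standard deviation) would concentrate the probability further.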
Running the Prediction from the Models
You will notice that the click data is provided as arrays of click and examination counts. The click data is converted into Gaussian observations in the same way as in the training model (though not partitioned by label). This distribution array is set as the value of the observationDistrib parameter, and the marginals are then requested from the inference engine. From the results we can see how confident the model is in each labeling.
What Infer.NET does well
Infer.NET provides the .NET programmer with:
- Powerful and Flexible Model Construction
The Infer.NET modeling API makes converting a conceptual model into code simple and effective. The API can be used to implement a wide range of models.
Supported models include Bayes point machines, latent Dirichlet allocation, factor analysis, and principal component analysis, each in only a few lines of code.
- Scalable and Composable Models
The Infer.NET modeling API is composable: you can implement complex conceptual models from building blocks, and you don’t have to implement the entire model at once. You can start with a simplified conceptual model that captures the basic features, then scale up the model and the data set in stages until you have a fully implemented model that can process real data sets. You can also scale these models up computationally, starting with a small data set and growing to handle much larger amounts of data, including through parallelized computation.
- Built-in Inference Engine
Infer.NET includes an inference engine that computes posteriors using Bayesian inference and numerical analysis. With Infer.NET, your application constructs a model, observes one or more variables, and queries the inference engine for posteriors. The query takes only a single line of code; the inference engine does the heavy lifting.
- Separation of Model from Inference
Infer.NET gets around the problem of having no clear distinction between the model and the inference algorithm.
Infer.NET maintains a clear distinction between model and inference. The model encodes basic prior knowledge. An Infer.NET model is typically confined to a single relatively small block of code. The model is often encapsulated in a separate class, so that you can use the same model for different queries.
A separate model is straightforward to understand and modify, and is much more resistant to inconsistencies. Inconsistencies that creep in will be caught by the inference engine. The inference engine handles the computations. You can change the model without touching the inference engine, and you can change the inference algorithm without touching the model.
Without this separation, you would typically be limited to relatively simple models; changing the model would be difficult; inconsistencies would be easier to introduce; and you would be locked into a particular inference algorithm.
HTML 5 Session from Google IO 2011
HTML 5 and Beyond
Create apps to help small businesses and entrepreneurs navigate the Federal Government more effectively: http://entrepreneurs.challenge.gov/
Official Rules from Challenge.gov
Apps for Entrepreneurs
For most entrepreneurs and small businesses, the Federal government has useful programs and services, but it can be hard to identify, engage and navigate Federal websites. Often, small businesses do not know that the Federal government already offers a program that they would find useful. Entrepreneurs and small businesses need better tools to navigate the Federal government’s vast resources – including programs, services, and procurement opportunities.
The Competition Goals
The Apps for Entrepreneurs Competition (the “Competition”) is an initiative of the U.S. Small Business Administration (SBA) to help make Federal government programs and information more useful to small businesses. The Competition will provide recognition to individuals or teams of individuals for developing innovative applications designed for the Web, a personal computer, a mobile handheld device, console, or any platform broadly accessible on the open internet that utilize data which is freely available on Federal government websites.
The Apps submitted for this Competition must use data from at least one of the following sources: Small Business Administration http://www.sba.gov/api; Small Business Innovation Research Program http://www.sbir.gov/apis; Green Government Opportunities for Small Business http://green.sba.gov/apis or Data.gov www.data.gov.
Submission Period: 1:00 am EST, November 5, 2011 to 11:00 pm ET, November 20, 2011
Judging Period: November 21-22, 2011
Winners Announced: November 23, 2011
How to Enter
Interested persons should read the official rules before entering the Competition. All Contestants must submit their Apps through the Challenge.gov portal. Please note, in order to submit an App, Contestants will first need to register and create an account with Challenge.gov. Contestants will:
- Prior to submitting an App, register with www.challenge.gov. Registration is free
- From the Competition webpage on Challenge.gov (http://entrepreneurs.challenge.gov), use the “Enter a Submission” tab to submit a description of the app, outline the system requirements to run the app, and provide a link to a fully functioning app hosted outside of Challenge.gov. Once an App is submitted, the Contestant cannot make any changes or alterations to any part of the Submission.
- After submission, all apps will be screened by SBA for malicious code or other security issues.
- Screened submissions will be posted on the Challenge.gov Competition webpage on a rolling basis. Apps failing to meet Submission Requirements or other Submission screenings will be deemed ineligible to win a prize. Posting an app to the Competition website does not constitute SBA’s final determination of Contestant or the app’s eligibility.
Seven (7) prizes are available:
1st Place (1 prize): Gift Card ($5,000 value)
2nd Place (3 prizes): Gift Card ($3,000 value)
3rd Place (3 prizes): Gift Card ($2,000 value)
The winners of these prizes (collectively, “Winners”) will be announced on November 23, 2011. Only one prize will be awarded for each winning submission, regardless of the number of Contestants that created the winning app. SBA reserves the right to substitute prizes of similar value without notice.
Prior to judging, all submitted apps will be screened for Contestant eligibility, completeness of submission and malicious code.
The members of the Judging panel will be selected by SBA at its sole discretion and will be comprised of up to ten technology, small business, and entrepreneurship experts from both the public and private sectors. Judges will be screened by SBA to ensure they do not: (1) have a personal or financial interest in any Contestant; or (2) have a familial relationship with a Contestant.
The Judging Panel will rate each Submission approved by the screening panel on the following criteria:
- Use of Required Data: Does the application use a combination of creative and relevant Federal data sets, including at least one data set from either http://www.sba.gov/apis; http://www.sbir.gov/apis; www.data.gov/apis or http://green.sba.gov/apis (25%)
- Technical Implementation: Is the application functional, well designed, and simple to use? Is it accessible to a wide range of users, including those with disabilities? (25%);
- Mission and Impact: Does the application meet the goals of the Competition? Application will be rated on the strength of its potential to make Federal government programs and information more useful to small businesses (25%);
- Creativity: Is the application innovative, creative and interesting? (25%)
In the event of a tie, the SBA Administrator will select the winner.
The Competition is open to citizens or permanent residents of the United States who are at least eighteen (18) years old at the time of entry and teams of individuals where each individual is a U.S. citizen or permanent resident at least 18 years of age (collectively referred to as “Contestants”). Eligible Contestants may submit more than one app and/or participate on more than one team.
Any Submissions developed with Federal funding (grant, contract, or loan proceeds) are not eligible to win. Federal employees and their immediate families, current SBA contractors, and SBA grant recipients may enter the Competition but are not eligible to win. Immediate family members include spouses, siblings, parents, children, grandparents, and grandchildren, whether as “in-laws” or by current or past marriage, remarriage, adoption, co-habitation or other familial extension, and any other persons residing at the same household location, whether or not related.
In order for an entry to be eligible to win this Competition, the entry must meet the following requirements:
- General – Contestants must host their own app during the submission and judging process and ensure SBA has continued access to the app throughout the judging process.
- Availability – Submissions must be free to the public during the Competition and for at least one year after.
- Acceptable Platforms – The app must be designed for the Web, a personal computer, a mobile handheld device, console, or any platform broadly accessible on the open internet.
- Data – The app must utilize Federal government data and/or information available from any publicly available Federal source, though it need not include all data fields or information available in a particular resource. Submissions must use at least one data set from any of the following resources: Small Business Administration http://www.sba.gov/apis; Small Business Innovation Research Program http://www.sbir.gov/apis; Green Government Opportunities for Small Business http://green.sba.gov/apis or Data.gov www.data.gov.
- Accessibility – The app must be accessible to a wide range of users, including users with disabilities (see Federal standards under Section 508 of the Rehabilitation Act http://www.section508.gov/index.cfm?fuseAction=stdsdoc ).
- Deadlines and Modifications – All Competition submissions must be submitted through the Challenge.gov portal by November 20, 2011 at 11:59 PM ET. Once an app is submitted through challenge.gov portal it must remain unchanged and unaltered until after the judging period.
- Intellectual Property – The Submission must not infringe any copyright or any other rights of any third party.
- No SBA logo – The app must not use SBA’s logo or official seal in the Submission, and must not claim SBA endorsement. The award of a prize in this Competition does not constitute an endorsement of a specific product by SBA or the Federal government.
- Functionality/Accuracy – A Submission may be disqualified if the application fails to function as expressed in the description provided by the Contestant, or if the application provides inaccurate information.
- Security – Submissions must be free of malware. Contestant agrees that SBA may screen the app to determine whether malware or other security threats may be present. SBA may disqualify the app if, in SBA’s judgment, the app may damage government or others’ equipment or operating environment.
- All Competition submissions must also adhere to the Challenge.gov Standards of Conduct (http://challenge.gov/terms#standards).
By making a Submission under this Competition, each Contestant warrants that he or she is the sole author and owner of the Submission, that the Submission is wholly original with the Contestant (or is an improved version of an existing app that the Contestant has sufficient rights to use – including the substantial improvement of existing open-source apps), and that it does not infringe any copyright or any other rights of any third party. Each Contestant also warrants that the app is free of malware.
All tools submitted to the SBA Apps for Entrepreneurs Competition remain the intellectual property of the individuals or teams that developed them. By registering and entering a Submission, however, the Contestant agrees that SBA reserves an irrevocable, nonexclusive, royalty-free license to use, copy, distribute to the public, create derivative works from, and publicly display a Submission for a period of one year, starting on the date of the announcement of the Winners, and to authorize others, including the general public, to use the Submission without restriction on a royalty-free basis. The reservation of SBA rights to authorize use of the Submission by the public includes the Contestant’s assent to SBA’s release of the application under an open-source software license, if SBA so chooses. The Contestant agrees to execute a separate license with SBA, as appropriate, for such purposes.
Verification of Winners
Winners must continue to comply with all terms and conditions of these Official Rules, and winning is contingent upon fulfilling all requirements contained herein. The Winners will be notified by email by November 23, 2011. The Winner’s name(s) will also be posted on the Competition and/or SBA website. In the event that a potential Winner is disqualified for any reason, SBA may award the applicable recognition to an alternate Contestant.
Participation in the Competition constitutes consent to SBA’s and its agents’ use of Competition winners’ name, likeness, photograph, voice, opinions, and/or hometown and state for promotional purposes in any media, worldwide, without further payment or consideration.
Liability and Insurance
The Contestant shall be liable for, and shall indemnify and hold harmless the Federal government against, all actions or claims, including but not limited to those for loss of or damage to property (such as damage that may result from a virus, malware, etc., to SBA computer systems or those of the end users of the software and/or applications), resulting from the fault, negligence, or wrongful act or omission of the Contestant.
Based on the subject matter of the Competition, the type of work that it will possibly require, and the likelihood of any claims for death, bodily injury, or property damage, or loss potentially resulting from contest participation, Contestants are not required to obtain liability insurance or demonstrate fiscal responsibility in order to participate in this Competition.
This Competition is subject to all applicable Federal laws and regulations.
SBA and its agents are not responsible for:
- Any incorrect or inaccurate information, whether caused by Contestants, printing errors, or by any of the equipment or programming associated with or utilized in the Challenge;
- Technical failure of any kind, including, but not limited to malfunctions, interruptions or disconnections in phone lines or network hardware or software;
- Unauthorized human intervention in any part of the entry process or the Competition;
- Technical or human error which may occur in the administration of the Competition or the processing of entries; or
- Any injury or damage to persons or property which may be caused, directly or indirectly, in whole or in part, from Contestant’s participation in the Competition or receipt, use or misuse of any prize.
If for any reason a Contestant’s entry is confirmed to have been erroneously deleted, lost, or otherwise destroyed or corrupted, Contestant’s sole remedy is another entry in the Competition.
SBA reserves the right at any time, for any reason, to cancel, suspend, and/or modify the Competition, or any part of it.
Participation constitutes each Contestant’s full and unconditional agreement to these Official Rules and administrative decisions which are final and binding in all matters related to the Competition. This contest notice is not an obligation of funds; the final award of prizes is contingent upon the availability of appropriations.
Newton Lee, my friend and former colleague at Media Station Inc. sent this news to me:
11.16.11 Special News: ACM CiE, Disney Stories, and C5
Dear friends, colleagues, and supporters of ACM CiE,
I am pleased to share with you the exciting news:
1. We will be launching the new interactive ACM Computers in Entertainment website soon. Please look out for an email from ACM. Please send me your articles, interviews, blogs, and videos on the topics of games, art, music, TV, movies, society, education, et al.
2. My new book Disney Stories: Getting to Digital (Springer 2012) is now available for pre-order on Amazon: http://www.amazon.com/exec/obidos/ASIN/1461421004/newtonlee-20/
Please forward me the Amazon order confirmation and the name of the charity of your choice, and 15% of my revenue will be donated to your charity!
3. Registration is now open for the 10th International Conference on Creating, Connecting and Collaborating through Computing (C5) to be held on 18-20 January 2012 at USC ICT. http://www.cm.is.ritsumei.ac.jp/c5-12/ It is an exciting conference you don’t want to miss.
Founder and co-Editor-in-Chief
ACM Computers in Entertainment
The Art Institute of Michigan featuring local animators including Chris Carden and Dale Myers on November 16, 2011.
Register! The event details are as follows:
The Art Institute of Michigan
Animation speakers & presentation
Nov 16 2011, 06:30 PM to 08:30 PM
28125 Cabot Drive Suite 120, Novi, MI 48377
For further information contact:
Detroit ACM SIGGRAPH
Welcome to the BlendersUX Blog. We are a designer group for web and mobile app technologies across platforms.
We meet in Ann Arbor, Michigan at Washtenaw Community College.
You can catch us both on Facebook at:
And on our Google Plus page at:
Our next meeting is: Tuesday, November 15 · 8:00pm – 9:00pm
Location: Washtenaw Community College
Vending machines are available for refreshments.
November 15, 2011 Agenda
Our Meeting (tag-teamed with the Michigan Interactive Meeting) will feature Greg Good from PixelOasis.com who will walk us through creating a mobile application with Adobe Flash Builder.
We will also continue planning our inaugural December meeting at Washtenaw Community College, which is tentatively scheduled to feature a local Apple iOS design/development expert.