
Kevin Gallo gives the developer perspective on today’s Windows 10 Event


Did you see the Microsoft Windows 10 Event this morning?  Satya, Terry, and Panos talked about some of the exciting new features coming in the Windows 10 Creators Update and announced some amazing new additions to our Surface family of devices. If you missed the event, be sure to check it out here.

As a developer, my first question when I see new features or new hardware is “What can I do with that?” We want to take advantage of the latest and coolest platform capabilities to make our apps more useful and engaging.

There were several announcements today that offer exciting opportunities for Windows developers.  Three of these that I want to tell you about are:

  • 3D in Windows 10, along with the first VR headsets capable of mixed reality, coming with the Windows 10 Creators Update.
  • The ability to put the people you care about most at the center of your experience—right where they belong—with Windows MyPeople.
  • Surface Dial, a new input peripheral designed for the creative process that integrates with Windows and is complementary to other input devices like the pen. It gives developers the ability to create unique multi-modal experiences that can be customized based on context. The APIs work in both Universal Windows Platform (UWP) and Win32 apps.

Rather than write a long blog post, I decided to go down to our Channel 9 studios and record a video that gives my thoughts and provides what I hope will be a useful developer perspective on today’s announcements.  Here’s my conversation with Seth Juarez from Channel 9:

My team and I are working hard to finish the platform work that will fully support the Windows 10 Creators Update, but you can start experimenting with many of the things we talked about today. Windows Insiders can download the latest flight of the SDK and get started right away.

If you want to dig deeper on the Surface Dial, check out the following links:

Stay tuned to this space for more information in the coming weeks as we get closer to the release of the Windows 10 Creators Update.  In the meantime, we always love to hear from you and welcome your feedback at the Windows Developer Feedback site.


Going social: Project Rome, Maps & social network integration (App Dev on Xbox series)


The Universal Windows Platform is filled with powerful and unique capabilities that allow the creation of some remarkable experiences on any device form factor. This week we are looking at an experience that builds on top of the Adventure Works sample we released last week by adding a social experience with the capability (1) to extend the experience to other devices that the user owns through the Project “Rome” APIs, (2) to be location aware using the powerful Maps API, and (3) to integrate with third-party social networks. As always, you can get the latest source code of the app right now on GitHub and follow along.

And if you missed last week’s article on how to enable great camera experiences, we covered how to build UWP apps that take advantage of camera APIs on the device and in the cloud through the Cognitive Services APIs to capture, modify, and understand images. To read last week’s blog post or any of the other blog posts in the series, or to watch the recordings from the App Dev on Xbox live event that started it all, visit the App Dev on Xbox landing page.

Adventure Works (v2)


To give you a quick recap of the sample app, we released the Adventure Works source code last week and discussed how we used a combination of client and cloud APIs to create a camera app capable of understanding images, faces and emotion, as well as being able to modify the images by applying some basic effects. Building on top of that, the goal for Adventure Works is to create a larger sample app that extends the experience with more social features, in which users can share photos and albums of their adventures with friends and family across multiple devices. Therefore, we’ve extended the sample app by:

  1. Adding the ability to have shared second screen experiences through Project Rome
  2. Adding location and proximal information for sharing with the location and Maps APIs
  3. Integrating with Facebook and Twitter for sharing by using the UWP Community Toolkit

Project Rome

Most people have multiple devices, and often begin an activity on one device but end up finishing it on another. To accommodate this, apps need to span devices and platforms.

The Remote Systems APIs, also known as Project Rome, enable you to write apps that let your users start a task on one device and continue it on another. The task remains the central focus, and users can do their work on the device that is most convenient for them. For example, you might be listening to the radio on your phone in the car, but when you get home you may want to transfer playback to the Xbox One that is hooked up to your home stereo system.

The Adventure Works app takes advantage of Project Rome in order to create a second screen experience. It uses the Remote System APIs to connect to companion devices for a remote control scenario. Specifically, it uses the app messaging APIs to create an app channel between two devices to send and receive custom messages. Devices can be connected proximally through Bluetooth and local network or remotely through the cloud, and are connected by the Microsoft account of the person using them.

In Adventure Works, you can use a tablet, phone or even your desktop as a second screen experience for a slideshow displayed on your TV through the Xbox One. The slideshow can be controlled easily on the Xbox through the remote or controller, and the second screen experience allows the same. However, with the second device, the user can view all photos at once, select which one to show on the big screen, and even take advantage of capabilities of the smaller device otherwise not available on the Xbox, such as inking on images for a collaborative experience.


Adventure Works uses Project Rome in two places to start the second screen experience. First, when a user navigates to a collection of photos, they can click Connect at the top to see available systems and connect to one of them. Or, if the Xbox is already showing a slideshow, a companion device will prompt the user to start controlling the experience.


For these scenarios to work, the app needs to be aware of other devices, and that is where Project Rome comes in. To start the discovery of devices, use the RemoteSystem.CreateWatcher method to create a remote system watcher and subscribe to the appropriate events before calling the Start method (see code on GitHub):


_remoteSystemWatcher = RemoteSystem.CreateWatcher(BuildFilters());
_remoteSystemWatcher.RemoteSystemAdded += RemoteSystemWatcher_RemoteSystemAdded;
_remoteSystemWatcher.RemoteSystemRemoved += RemoteSystemWatcher_RemoteSystemRemoved;
_remoteSystemWatcher.RemoteSystemUpdated += RemoteSystemWatcher_RemoteSystemUpdated;
_remoteSystemWatcher.Start();

The BuildFilters method simply creates a list of filters for the watcher. For the purposes of Adventure Works we chose to limit the discovery to only Xbox and Desktop devices that are available in proximity.
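For reference, here’s a minimal sketch of what a filter list like that could look like (the actual implementation is in the Adventure Works source on GitHub):

// A sketch of BuildFilters: discover only proximal Xbox and Desktop devices.
private List<IRemoteSystemFilter> BuildFilters()
{
    return new List<IRemoteSystemFilter>
    {
        // only devices reachable over Bluetooth or the local network
        new RemoteSystemDiscoveryTypeFilter(RemoteSystemDiscoveryType.Proximal),

        // only Xbox and Desktop device kinds
        new RemoteSystemKindFilter(new[] { RemoteSystemKinds.Xbox, RemoteSystemKinds.Desktop })
    };
}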

We wanted to be able to launch the app on the Xbox from any other device and go directly to the slideshow. We first declared a protocol in the app manifest and implemented the OnActivated method in App.xaml.cs to launch the app directly to the slideshow. Once this was done, we were able to use the RemoteLauncher.LaunchUriAsync command to launch the slideshow on the remote app if it wasn’t already running (see code on GitHub).

var launchUriStatus =
    await RemoteLauncher.LaunchUriAsync(
        new RemoteSystemConnectionRequest(system.RemoteSystem),
        new Uri("adventure:" + deepLink)).AsTask().ConfigureAwait(false);

To control the slideshow, we needed to be able to send and receive messages between the two devices. We covered AppServiceConnection in a previous blog post, but it can also be used to create a messaging channel between apps on different devices using the OpenRemoteAsync method (see code on GitHub).


var appService = new AppServiceConnection()
{
    AppServiceName = "com.adventure",
    PackageFamilyName = Windows.ApplicationModel.Package.Current.Id.FamilyName
};

RemoteSystemConnectionRequest connectionRequest = new RemoteSystemConnectionRequest(remoteSystem);
var status = await appService.OpenRemoteAsync(connectionRequest);

if (status == AppServiceConnectionStatus.Success)
{
    var message = new ValueSet();
    message.Add("ping", "");
    var response = await appService.SendMessageAsync(message);
}

Once the app is running, both the client and the host can send messages to communicate status and control the slideshow. Messages are not limited to simple strings; arbitrary binary data can be sent over, such as inking information. (This messaging code happens in SlideshowClientPage and SlideshowPage, and the messaging events are all implemented in the ConnectedService source file.)

For example, in the client, the code to send ink strokes looks like this:


var message = new ValueSet();
message.Add("stroke_data", data); // data is a byte array
message.Add("index", index);
var response = await ConnectedService.Instance.SendMessageFromClientAsync(message, SlideshowMessageTypeEnum.UpdateStrokes);

The message is sent over using ValueSet objects and the host handles the stroke messages (along with other messages) in the ReceivedMessageFromClient handler:


private void Instance_ReceivedMessageFromClient(object sender, SlideshowMessageReceivedEventArgs e)
{
    switch (e.QueryType)
    {
        case SlideshowMessageTypeEnum.Status:
            e.ResponseMessage.Add("index", PhotoTimeline.CurrentItemIndex);
            e.ResponseMessage.Add("adventure_id", _adventure.Id.ToString());
            break;
        case SlideshowMessageTypeEnum.UpdateIndex:
            if (e.Message.ContainsKey("index"))
            {
                var index = (int)e.Message["index"];
                PhotoTimeline.CurrentItemIndex = index;
            }
            break;
        case SlideshowMessageTypeEnum.UpdateStrokes:
            if (e.Message.ContainsKey("stroke_data"))
            {
                var data = (byte[])e.Message["stroke_data"];
                var index = (int)e.Message["index"];
                HandleStrokeData(data, index);
            }
            break;
        default:
            break;
    }
}

As mentioned above, the user should be able to directly jump into an ongoing slideshow. As soon as MainPage is loaded, we try to find out if there are any devices already presenting a slideshow. If we find one, we prompt the user to start controlling the slideshow remotely. The code to search for other devices, below (and on GitHub), returns a list of AdventureRemoteSystem objects.


public async Task<List<AdventureRemoteSystem>> FindAllRemoteSystemsHostingAsync()
{
    List<AdventureRemoteSystem> systems = new List<AdventureRemoteSystem>();
    var message = new ValueSet();
    message.Add("query", ConnectedServiceQuery.CheckStatus.ToString());

    foreach (var system in Rome.AvailableRemoteSystems)
    {
        var response = await system.SendMessage(message);
        if (response != null && response.ContainsKey("status"))
        {
            var status = (ConnectedServiceStatus)Enum.Parse(typeof(ConnectedServiceStatus), (String)response["status"]);
            if (status == ConnectedServiceStatus.HostingConnected || status == ConnectedServiceStatus.HostingNotConnected)
            {
                systems.Add(system);
            }
        }
    }

    return systems;
}

An AdventureRemoteSystem is really just a wrapper around the base RemoteSystem class from the Remote Systems APIs, and is used to identify instances of the Adventure Works app running on other devices like Surface tablets, Xbox One and Windows 10 phones.
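Conceptually, the wrapper boils down to something like the following simplified sketch (the real class in the sample does more, such as caching the connection and raising events):

// Simplified sketch of AdventureRemoteSystem; see the GitHub source for the real class.
public class AdventureRemoteSystem
{
    public RemoteSystem RemoteSystem { get; }

    public AdventureRemoteSystem(RemoteSystem remoteSystem)
    {
        RemoteSystem = remoteSystem;
    }

    // Open a remote app service channel to this system and send a ValueSet.
    public async Task<ValueSet> SendMessage(ValueSet message)
    {
        var connection = new AppServiceConnection()
        {
            AppServiceName = "com.adventure",
            PackageFamilyName = Windows.ApplicationModel.Package.Current.Id.FamilyName
        };

        var status = await connection.OpenRemoteAsync(new RemoteSystemConnectionRequest(RemoteSystem));
        if (status != AppServiceConnectionStatus.Success)
        {
            return null;
        }

        var response = await connection.SendMessageAsync(message);
        return response.Status == AppServiceResponseStatus.Success ? response.Message : null;
    }
}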

Make sure to check out the full source code and try it on your own devices. And if you want to learn even more, make sure to check out the Cross-device experiences with Project Rome blog post.

Maps and location

As part of building out Adventure Works, we knew that we wanted to develop an app that showed a more social experience, so we added a way to see the adventures of our fictional friends and the locations of those adventures. UWP supports rich map experiences by providing controls to display maps with 2D, 3D or Streetside views using APIs from the Windows.UI.Xaml.Controls.Maps namespace. You can mark points of interest (POI) on the map by using pushpins, images, shapes or XAML UI elements. You can use location services with your map to find notable places, and you can even overlay tiled images or replace the map imagery altogether.
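For example, overlaying your own tile imagery takes only a couple of lines. Here’s a minimal sketch; the tile URL template points at a hypothetical tile server and is not part of Adventure Works:

// Overlay custom map tiles; {x}, {y} and {zoomlevel} are filled in by the control.
var dataSource = new HttpMapTileDataSource("http://example.com/tiles/{zoomlevel}/{x}/{y}.png");
Map.TileSources.Add(new MapTileSource(dataSource));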

The UWP Maps APIs provide powerful yet simple tools for working with and customizing location data. For instance, in order to get the user’s current location, you use the Geolocator class to request the current geoposition of the device:


var accessStatus = await Geolocator.RequestAccessAsync();
switch (accessStatus)
{
    case GeolocationAccessStatus.Allowed:

        // Get the current location.
        Geolocator geolocator = new Geolocator();
        Geoposition pos = await geolocator.GetGeopositionAsync();
        return pos.Coordinate.Point;

    default:
        // Handle the case where an unspecified error occurs
        return null;
}

With this location information in hand, you can then create a MapIcon object based on it and add it to your map control.


if (currentLocation != null)
{
    var icon = new MapIcon();
    icon.Location = currentLocation;
    icon.NormalizedAnchorPoint = new Point(0.5, 0.5);
    icon.Image = RandomAccessStreamReference.CreateFromUri(new Uri("ms-appx:///Assets/Square44x44Logo.targetsize-30.png"));

    Map.MapElements.Add(icon);
}

Adding friends to the map is similar, but we used XAML elements instead of a MapIcon, giving us the ability to move focus through each one using the controller or remote on the Xbox.


Map.Children.Add(button);
MapControl.SetLocation(button, point);
MapControl.SetNormalizedAnchorPoint(button, new Point(0.5, 0.5));

Directional navigation works best when focusable elements are laid out in a grid. Because the friends can be laid out randomly on the map, we wanted to make sure that the focus experience works great with the controller. We used the XYFocus properties of the buttons to specify how the focus should move from one to the other. We used the longitude to specify the order, so the user can move through each friend left and right, and pressing down brings the focus to the main controls. To see the full implementation, take a look at the project on GitHub.


foreach (var button in orderedButtons)
{
    button.XYFocusUp = button;
    button.XYFocusRight = button;
    button.XYFocusLeft = previousBtn != null ? previousBtn : button;
    button.XYFocusDown = MainControlsViewOldAdventuresButton;

    if (previousBtn != null)
    {
        previousBtn.XYFocusRight = button;
    }

    previousBtn = button;
}
if (orderedButtons.Count() > 1)
{
    orderedButtons.Last().XYFocusRight = orderedButtons.First();
    orderedButtons.First().XYFocusLeft = orderedButtons.Last();
}

While the Adventure Works app only uses geolocation for the current device, you can easily extend it to do things like finding nearby friends. You should also consider lighting up additional features depending on which device the app is running on. For example, finding great nearby places to take photos is really more of a mobile experience than a living room experience, so you could add that feature but only enable it when the app is running on a phone, as sketched below.
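A minimal sketch of that kind of device-family check follows; NearbyPlacesButton is a hypothetical control, not part of the sample:

// Light up a phone-only feature based on the device family.
// AnalyticsInfo lives in Windows.System.Profile; NearbyPlacesButton is hypothetical.
bool isPhone = AnalyticsInfo.VersionInfo.DeviceFamily == "Windows.Mobile";
NearbyPlacesButton.Visibility = isPhone ? Visibility.Visible : Visibility.Collapsed;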

Facebook and Twitter integration (and the UWP Community Toolkit)

What’s more social than being able to share adventures and photos to your favorite social networks? The UWP Community Toolkit includes service integration for both Facebook and Twitter, simplifying OAuth authentication along with your most common social tasks.

The open-source toolkit includes helper functions, animations, tile and toast notifications, and custom controls and app services that simplify or demonstrate common developer tasks, and it has been used extensively throughout Adventure Works. It can be used with any new or existing UWP app written in C# or VB.NET, and the app can be deployed to any Windows 10 device, including the Xbox One. Because it is strongly aligned with the Windows SDK for Windows 10, feedback about the toolkit will be incorporated in future SDK releases. And it just makes common tasks easy and simple!


For instance, logging in and posting to Twitter can be accomplished in only three lines of code.


// Initialize service, login, and tweet
TwitterService.Instance.Initialize("ConsumerKey", "ConsumerSecret", "CallbackUri");
await TwitterService.Instance.LoginAsync();
await TwitterService.Instance.TweetStatusAsync("Hello UWP!", imageStream);

The Adventure Works app lets users authenticate with either their Twitter account or their Facebook account. The standard UWP Community Toolkit code for authenticating with Twitter is shown above. Doing the same thing with Facebook is just as easy.


FacebookService.Instance.Initialize(Keys.FacebookAppId);
var success = await FacebookService.Instance.LoginAsync();
await FacebookService.Instance.PostPictureToFeedAsync("Shared from Adventure Works", "my photo", stream);

Take a look at the Identity.cs source file on GitHub for the full implementation in Adventure Works, and make sure to visit the UWP Community Toolkit GitHub page to learn more. The toolkit is written for the community and fully welcomes the developer community’s input. It is intended to be a repository of best practices and tools for those of us who love working with XAML platforms. You can also preview the capabilities of the toolkit by downloading the UWP Community Toolkit Sample App in the Windows Store.

That’s all for now

Make sure to check out the app source on our official GitHub repository, read through some of the resources provided, watch the event if you missed it and let us know what you think through the comments below or on Twitter @WindowsDev.

Don’t miss the last blog post of the series next week, where we’ll share the finished Adventure Works sample app and discuss how to take advantage of more personal computing APIs such as speech and inking.

Until then, happy coding!

Resources for Hosted Web Apps

 

Bringing 3D to everyone through open standards


Earlier this week, at the Microsoft Windows 10 Event in New York, we shared our vision (read more about it from Terry Myerson and Megan Saunders) around 3D for everyone. As part of achieving that vision, we are delighted to share that Microsoft is joining the 3D Formats working group at Khronos to collaborate on its GL Transmission Format (glTF).

At Microsoft, we are committed to an open and interoperable 3D content development ecosystem.  As 3D content becomes more pervasive, there is a need for a common, open and interoperable language to describe, edit, and share 3D assets between different applications. glTF fills this need as an expressive and capable open standard.

We look forward to collaborating with the community and our industry partners to help glTF deliver on its objectives and achieve broad support across many devices and applications. To further the openness goal, we will continue our open source contributions including further development of glTF support in the open source frameworks such as BabylonJS.

As the working group starts thinking about the next version, we are especially interested in joining discussions about some of the subjects that have seen the biggest community momentum in the public forums. The Physically Based Rendering (PBR) materials proposal is one of those topics. PBR materials are a flexible way for 3D content creators to specify the rendering characteristics of their surfaces. Industry-standard implementations can ensure that any PBR content will look consistent irrespective of the scene lighting and environment. Additionally, because the PBR material definition is a high-level abstraction that is not tied to any specific platform, 3D assets with PBR materials can be rendered consistently across platforms.

This kind of cross-platform, cross-application power is what will ultimately make glTF truly ubiquitous and Microsoft is proud to be part of this journey.
Forest W. Gouin – Windows Experiences Group
Jean Paoli – Windows Developer Platform

Just released – Windows developer evaluation virtual machines – October 2016 build


We’re releasing the October 2016 edition of our evaluation Windows developer virtual machines (VMs) on Windows Dev Center. The VMs come in Hyper-V, Parallels, VirtualBox and VMware flavors and will expire on 01/17/17.

These installs contain:

If you want a non-evaluation version, we have licensed virtual machines as well, but you’ll need a Windows 10 Pro key.  The Azure portal also has virtual machines you can spin up with the Windows developer tooling already installed!

If you have feedback on the VMs, please provide it over at the Windows Developer Feedback UserVoice site.

 

In Case You Missed It – This Week in Windows Developer


This week in the world of Windows – announcements and tutorials big and small ushered in next-level capabilities for all developers, from mixed reality support with the new Windows 10 Creators Update to a “how-to” on location sharing and more with Project Rome.

On to the recap!

High DPI support means beautiful apps at any size

Users today work on and with all types of devices, with all sizes of screen: wearables with minuscule displays, desktops with high-resolution monitors, large-format screens with millions of pixels. How do you make your app look good on any device? Windows now has you covered with improvements in high DPI (dots per inch) scaling. Get the latest by clicking through below.

 

Project Rome helps apps go social in the latest App Dev on Xbox post

Our App Dev on Xbox series continued with a tutorial on the capabilities of Project Rome, location sharing and more. Through the sample app Adventure Works, our team brought you the ability to have shared second screen experiences through Project Rome; location and proximal information for sharing with the location and Maps APIs; and integration with Facebook and Twitter for sharing via the UWP Toolkit.

Try it out for yourself – follow the tutorial linked below:

 

Surface Studio, Surface Dial and the Windows 10 Creators Update – oh my!

We’re a bit biased, but perhaps the most exciting news for our dev community this week came from Wednesday’s Microsoft Windows 10 Event. From Surface Studio and Surface Dial to the Windows 10 Creators Update, there will soon be an even greater wealth of opportunity for developers: new input methods to incorporate into their apps, support for mixed reality experiences and much, much more.

If you didn’t watch the Microsoft Windows 10 Event livestream, check out some highlights from the event below:


 

And read Kevin Gallo’s take on what these announcements mean for developers:


 

Microsoft joins the 3D Formats working group at Khronos

We’re delighted to share that Microsoft is joining the 3D Formats working group at Khronos to collaborate on its GL Transmission Format (glTF). Among the areas we’re excited to dig into – with such clear enthusiasm and interest from the developer community – is Physically Based Rendering (PBR) material. Read more about next steps for the working group here.

October virtual machine updates are live!

And last, but certainly not least – the latest virtual machine updates have gone live. Read all about the installs by clicking through below:

 

Happy coding to all!

The “Internet of Stranger Things” Wall, Part 1 – Introduction and Remote Wiring


Overview

I am a child of the 80s. Raiders of the Lost Ark was the first movie I saw by myself in the movie theater. The original Star Wars trilogy was an obsession. And WarGames, more than anything else, is what inspired me to become a programmer.

But it was movies like The Goonies that I would watch over and over again because they spoke to me in a language that reflected what it was like to be a kid at that time. They took kids on a grand adventure, while still allowing them to be kids in a way that so few movies can pull off.

So, of course when a friend pointed out the Netflix series Stranger Things, I dove right in, and while sitting down at my PC I binge-watched every episode over a weekend. It had a treatment of 80s childhood that was recognizable, without being a painful cliché. It referenced movies like The Goonies, ET, and The X-Files in a really fun way.

If you haven’t yet watched the series, go ahead and watch it now. This blog post will still be here when you finish up. 🙂

One of the most iconic scenes in the series is when Winona Ryder, herself a star of some of my favorite 80s and 90s movies, uses an alphabet wall made of Christmas lights to communicate with her son Will, who is stuck in the Upside Down.

While not physically there, Will could still hear her. So, she would ask him a question and he would respond by lighting up the individual Christmas light associated with each letter on the wall. In the show, the alphabet wall takes up one whole wall in her living room.

I won’t go into more detail than that because I don’t want to spoil the show for those who have not yet seen it or for those who didn’t take my advice to stop and watch it now.

Here’s my smaller (approximately 4’ x 4’) version of the alphabet wall as used during my keynote at the TechBash 2016 conference in Pennsylvania:

[Photo: the alphabet wall on stage at TechBash 2016]

“Will? Will? Are you there?”

At the events where I used it, I put on a wig that sort of resembled Winona’s frazzled hair in the series (but also made me look like part of a Cure cover band), and had my version of the theme/opening music playing on an Elektron Analog Four synthesizer/sequencer in the background. I then triggered the wall with a question and let it spell out the answer with the Christmas lights on the board.

Here’s a block diagram of the demo structure. You can see it involves a few different pieces, all of which are things I enjoy playing with.

[Block diagram of the demo structure]

In this three-part series, I’ll describe how I built the wall, what products I used, how I built the app, how I built and communicated with the bot framework-based service, and how I made the music. In the end, you should have enough information to be able to create your own version of the wall. You’ll learn about:

  • Windows Remote Wiring
  • LED sink ICs
  • Constructing the Wall
  • Wiring the LED Christmas lights
  • Adding UWP voice recognition
  • Setting up a natural language model in LUIS
  • Building a Bot Framework-based bot
  • Music and MIDI
  • And more

There will be plenty of code and both maker and developer-focused technical details along the way.

This first post will cover:

  • Creating the UWP app
  • Windows Remote Wiring
  • Using the MBI5026 LED sink driver

If you’re unfamiliar with the show or the wall, and want to see a quick online-only version of a Stranger Things alphabet wall you can see one at http://StrangerThingsGIFGenerator.com. Example:

[Animated GIF example from StrangerThingsGIFGenerator.com]

The remainder of the series will be posted this week. Once they are up, you’ll be able to find the other posts here:

  • Part 1 – Introduction and Remote Wiring (this post)
  • Part 2 – Constructing the wall and adding music
  • Part 3 – Adding voice recognition and intelligence

Creating the basic UWP app

This app is something I used for demonstrating at a couple conferences. As such, it has an event-optimized UI — meaning big text that will show up well even on low contrast projectors. Additionally, it means I need a button to test the board (“Send Alphabet”), test MIDI (“Toggle MIDI”), echo back in case the network is down, and also submit some canned questions in case the network or bot service can’t be reached. When you do live demos, it’s always good to have backups and alternate paths so that a single point of failure doesn’t kill the entire demo. From experience, I can tell you that networks at venues, even speaker and keynote networks, are the single most common killer of cool demos.

This is the UI I put together.

[Screenshot: the demo app UI]

The microphone button starts voice recognition. In case of microphone failure (backups!) I can simply type in the text box — the message icon to the right submits the message. In the case of echo, it simply lights it up on the wall with the text, bypassing the online portion of the demo. In the case of the “Ask a question” field, it sends the message to a Bot Framework bot to be processed.

Despite the technologies I’m using, everything here starts with the standard C#/XAML UWP Blank App template in Visual Studio. I don’t need to use any specific IoT or bot-centric templates for the Windows 10 app.

I am on the latest public SDK version at the time of this post. This is important to note, because the NuGet MIDI library only supports that version (or higher) of the Windows 10 Anniversary Update SDK. (If you need to use an earlier version like 10586, you can compile the library from source.)

I use the Segoe MDL2 Assets font for the icons on the screen. That font is the current Windows standard iconography font. There are a few ways to do this in XAML. In this case, I just set the font and pasted in the correct Unicode value for the icon (you can use Character Map or another app if you wish). One very helpful resource that I use when working with this font is the ModernIcons.io Segoe MDL2 Assets – Cheatsheet site. It gives you the Unicode values in a markup-ready format, making it super easy to use in your XAML or HTML app.


There’s also a free app which you may prefer over the site.
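For example, setting the microphone glyph from code-behind looks roughly like this (MicrophoneIcon is a hypothetical TextBlock; in XAML you would set the same two properties as attributes):

// Show the Segoe MDL2 Assets microphone glyph (U+E720).
MicrophoneIcon.FontFamily = new FontFamily("Segoe MDL2 Assets");
MicrophoneIcon.Text = "\uE720";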

The rest of the UI is standard C# and XAML stuff (I’m not doing anything fancy). In fact, when it comes to program structure, you’ll find this demo wanting. Why? When I share this source code, I want you to focus on what’s required to use any of these technologies rather than taking a cognitive hit trying to grok whatever design pattern I used to structure the app. Unless specifically trying to demonstrate a design pattern, I find over-engineered demo apps cumbersome to wade through when looking for a chunk of code to solve a specific problem.

Windows Remote Wiring Basics

When I built this, I wanted to use it as a way to demonstrate how to use Windows Remote Wiring (also called Windows Remote Arduino). Windows Remote Wiring makes it possible to use the IO on an Arduino from a Windows Store app. It does this by connecting to the Arduino through a USB or Bluetooth serial connection, and then using the Firmata protocol (which is itself built on MIDI) to transfer the pin values and other commands back and forth.

Typically used with a PC or phone, you can even use this approach with a Windows 10 IoT Core device and an Arduino. That’s a quick way to add additional IO or other capabilities to an IoT project.

For a primer on Remote Wiring, check the link above, or take a look at this video to learn a bit more about why we decided to make this possible:

Remoting in this way has slower IO than doing the work directly on the Arduino, but as an example this is just fine. If you were going to do something production-ready using this approach, I’d recommend bringing the calls up to a higher level and remote commands (like “Show A”) to the Arduino instead of remoting the pin values and states.

The reason the PC is involved at all is because we need the higher-level capabilities offered by a Windows 10 PC to communicate with the bot, do voice recognition, etc. You could also do these on a higher level IoT Core device like the Intel Joule.

Remote wiring is an excellent way to prototype a solution from the comfort of your PC. It’s also very useful when you’re trying to decide what capabilities you’ll ultimately need in the final target IoT board. The API is very similar to the Windows.Devices.Gpio APIs, so moving to Windows 10 IoT Core when moving to production is not very difficult at all.

For my project, I used a very long USB cable. I didn’t want to mess around with Bluetooth at a live event.

To initialize the Arduino connection in this project, I used this code in my C# standard Windows 10 UWP app:


RemoteDevice _arduino;
UsbSerial _serial;

private const string _vid = "VID_2341";
private const string _pid = "PID_0043";


private void InitializeWiring()
{
    _serial = new UsbSerial(_vid, _pid);
    _arduino = new RemoteDevice(_serial);

    _serial.ConnectionEstablished += OnSerialConnectionEstablished;

    _serial.begin(57600, SerialConfig.SERIAL_8N1);
}

I got the VID and PID from looking in the Device Manager properties for the connected Arduino. Super simple, right? I found everything I needed in our tutorial files and documentation.

The final step for Arduino setup is to set the pin modes. This is done in the handler for the ConnectionEstablished event.


private void OnSerialConnectionEstablished()
{

    //_arduino.pinMode(_sdiPin, PinMode.I2C);
    _arduino.pinMode(_sdiPin, PinMode.OUTPUT);
    _arduino.pinMode(_clockPin, PinMode.OUTPUT);
    _arduino.pinMode(_latchPin, PinMode.OUTPUT);
    _arduino.pinMode(_outputEnablePin, PinMode.OUTPUT);

    _arduino.digitalWrite(_outputEnablePin, PinState.HIGH); // turn off all LEDs

    ClearBoard(); // clear out the registers
}

private const UInt32 _clearValue = 0x0;
private async void ClearBoard()
{
    // clear it out
    await SendUInt32Async(_clearValue, 0);

}

The SendUInt32Async method will be explained in a bit. For now, it’s sufficient to know that it is what lights up the LEDs. Now to work on the electronics part of the project.

Arduino connection to the LED sink ICs

There are a number of good ways to drive the LEDs using everything from specialized drivers to transistors to various types of two dimensional arrays (a 5×6 array would do it, and require 11 IO pins). I decided to make it super simple and dev board-agnostic and use the MBI5026GN LED driver chip, purchased from Evil Mad Scientist. A single MBI5026 will sink current from 16 LEDs. To do a full alphabet of 26 letters, I used two of these.

The MBI5026 is very simple to use. It’s basically a souped-up shift register with above-average constant current sinking abilities. I connected the LED cathodes (negative side) to the pins and the anode (positive side) to positive voltage. To turn on an LED, just send a high value (1) for that pin.

So, with pins 0 through 5, 12 and 15 turned on, we would send a set of high/low values that looks like this:

pin:    15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
value:   1  0  0  1  0  0  0  0  0  0  1  1  1  1  1  1

The MBI5026 data sheet explains how to pulse the clock signal so it knows when to read each value. There are a couple other pins involved in the transfer, which are also documented in the data sheet.

The IC also includes a pin for shifting out bits that are overflowing from its 16 positions. In this way, you can chain as many of these together as you want. In my case, I chained together two and always passed in 32 bits of data. That’s why I used a UInt32 in the above code.

In this app, I’ll only ever turn on a single LED at a time. So every value sent over will be a single bit turned on with the other thirty-one bits turned off. (This also makes it easier to get away with not worrying about the amp draw from the LEDs.)

To make mapping letters to the 32-bit value easier, I created an array of 32-bit numbers in the app and stored them as the character table for the wall. Although I followed alphabetic order when connecting them, this table approach also supports arbitrary connections of the LEDs, as long as the values in the array stay in alphabetical order of the letters they represent.


private UInt32[] _letterTable = new UInt32[]
{
    0x80000000, // A 10000000000000000000000000000000 binary
    0x40000000, // B 01000000000000000000000000000000
    0x20000000, // C 00100000000000000000000000000000
    0x10000000, // D ...
    0x08000000, // E
    0x04000000, // F
    0x02000000, // G
    0x01000000, // H
    0x00800000, // I
    0x00400000, // J
    0x00200000, // K
    0x00100000, // L
    0x00080000, // M
    0x00040000, // N
    0x00020000, // O
    0x00010000, // P
    0x00008000, // Q
    0x00004000, // R
    0x00002000, // S
    0x00001000, // T
    0x00000800, // U
    0x00000400, // V
    0x00000200, // W ...
    0x00000100, // X 00000000000000000000000100000000
    0x00000080, // Y 00000000000000000000000010000000
    0x00000040, // Z 00000000000000000000000001000000
};

These numbers will be sent to the LED sink ICs, LSB (Least Significant Bit) first. In the case of the letter A, that means the bit to turn on the letter A will be the very last bit sent over in the message. That bit maps to the first pin on the first IC.

LEDs require resistors to limit current and keep from burning out. There are a number of scientifically valid approaches to testing the LED lights and figuring out which resistor size to use. I didn’t use any of them, and instead opted to burn out LEDs until I found a reasonable value. 🙂

In reality, with the low voltage we’re using, you can get close using any online resistor value calculator and the default values. We’re not trying to maximize output here and the values would normally be different from color to color (especially blue and white vs. orange and red), in any case. A few hundred ohms works well enough.

Do note that the way the MBI5026 handles the resistor and sets the constant current is slightly different from what you might normally use. One resistor is shared for all 16 LEDs, and the driver is a constant current driver. The formula is given on page 9 of the datasheet.

I_OUT = (V_R-EXT / R_ext) × 15

But again, we’re only lighting one LED at a time and we’re not looking to maximize performance or brightness here. Additionally, we’re not using 16 LEDs at once. And, as said above, we also don’t know the actual forward current or forward voltage of the LEDs we’re using. If you want to be completely correct, you could have a different sink driver for each unique LED color, figure out the forward voltage and the correct resistor value, and then plug that in to the appropriate driver.
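To put rough numbers on that (my own worked example, assuming the roughly 1.26 V reference voltage I read from the data sheet): with an external resistor of 630 Ω, I_OUT = (1.26 V / 630 Ω) × 15 = 30 mA per channel. Check the reference voltage in your copy of the data sheet before sizing the resistor.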

With that information at hand, it’s time to wire up the breadboard. Assuming I didn’t forget any, here’s the list of all the connections.

[Table: the full list of Arduino-to-MBI5026 connections]

Or if you prefer something more visual:

[Breadboard wiring diagram]

I handled the wiring in two stages. In stage one, I wired the MBI5026 breadboard to the individual posts for each letter. This let me do all that fiddly work at my desk instead of directly on the wall. I used simple construction screws (which I had tested for conductivity) as posts to wire to.

You can see the result here, mounted on the back of the wall.

[Photo: the driver board mounted on the back of the wall]

You can see the individual brown wires going from each of the output pins on the pair of MBI5026 ICs directly to the letter posts. I simply wrapped the wire around the posts; there is no solder or hot glue involved. If you decide to solder the wires instead, use caution: the screws will sink a lot of the heat, and you’ll likely end up scorching the paper label and burning down all your hard work. The wire-wrapped approach is easier and also easily repaired. It also avoids fire. Fire = bad.

The board I put everything on ended up being a bit large to fit between the rows on the back of the wall, so I took the whole thing over to the table saw. I’m the first person I know to take an Arduino, breadboard and wired circuit, and run it across a saw. It survived. 🙂


In the Windows app, I wanted to make sure the code would allow taking an arbitrary string as input and would light up the LEDs in the right order. First, the code that processes the string:


public async Task RenderTextAsync(string message,
             int onDurationMs = 500, int delayMs = 0,
             int whitespacePauseMs = 500)
{
    message = message.ToUpper().Trim();

    byte[] asciiValues = Encoding.ASCII.GetBytes(message);

    int asciiA = Encoding.ASCII.GetBytes("A")[0];

    for (int i = 0; i < message.Length; i++)
    {
        char ch = message[i];

        if (char.IsWhiteSpace(ch))
        {
            // pause
            if (whitespacePauseMs > 0)
                await Task.Delay(whitespacePauseMs);
        }
        else if (char.IsLetter(ch))
        {
            byte val = asciiValues[i];
            int ledIndex = val - asciiA;

            UInt32 bitmap = _letterTable[ledIndex];

            // send the letter
            await SendUInt32Async(bitmap, onDurationMs);

            // clear it out
            await SendUInt32Async(_clearValue, 0);

            if (delayMs > 0)
                await Task.Delay(delayMs);

        }
        else
        {
            // unsupported character. Ignore
        }
    }
}

The code first gets the ASCII value for each character in the string. Then, for each character in the string, it checks to see if it’s whitespace or a letter. If neither, it is ignored. If whitespace, we delay for a specified period of time. If a letter, we look up the appropriate letter 32-bit value (a bitmap with a single bit turned on), and then send that bitmap to the LEDs, LSB first.

The code to send the 32-bit map is shown here:


private const int _latchPin = 7;            // LE
private const int _outputEnablePin = 8;     // OE
private const int _sdiPin = 3;              // SDI
private const int _clockPin = 4;            // CLK

// send 32 bits out by bit-banging them with a software clock
private async Task SendUInt32Async(UInt32 bitmap, int outputDurationMs)
{
    for (int i = 0; i < 32; i++)
    {
        // clock low
        _arduino.digitalWrite(_clockPin, PinState.LOW);

        // get the next bit to send
        var b = bitmap & 0x01;

        if (b > 0)
        {
            // send 1 value

            _arduino.digitalWrite(_sdiPin, PinState.HIGH);
        }
        else
        {
            // send 0 value
            _arduino.digitalWrite(_sdiPin, PinState.LOW);
        }

        // clock high
        _arduino.digitalWrite(_clockPin, PinState.HIGH);

        await Task.Delay(1);    // this is an enormous amount of time,
                                // of course. There are faster timers/delays
                                // you can use.

        // shift the bitmap to prep for getting the next bit
        bitmap >>= 1;
    }

    // latch
    _arduino.digitalWrite(_latchPin, PinState.HIGH);
    await Task.Delay(1);
    _arduino.digitalWrite(_latchPin, PinState.LOW);

    // turn on LEDs
    _arduino.digitalWrite(_outputEnablePin, PinState.LOW);

    // keep the LEDs on for the specified duration
    if (outputDurationMs > 0)
        await Task.Delay(outputDurationMs);

    // turn the LEDs off
    _arduino.digitalWrite(_outputEnablePin, PinState.HIGH);
}

This is bit-banging a shift register over USB, to an Arduino. No, it’s not fast, but it doesn’t matter at all for our use here.

The MBI5026 data sheet includes the timing diagram I used when figuring out how to send the clock signals and data. Note that the actual period of these clock pulses isn’t important; it’s the relative timing/order of the signals that counts. The MBI5026 can be clocked at up to 25 MHz.

[Figure: MBI5026 timing diagram from the data sheet]

Using that information, I was able to prototype using regular old LEDs on a breadboard. I didn’t do all 26, but I did a couple at the beginning and a couple at the end to ensure I didn’t have any off-by-one errors or similar.

Next, I needed to scale it up to a real wall. We’ll cover that in the next post, before we finish with some speech recognition and natural language processing.

Resources

Questions or comments? Have your own version of the wall, or used the technology described here to help rid the universe of evil? Post below and follow me on Twitter @pete_brown.

Most of all, thanks for reading!

The “Internet of Stranger Things” Wall, Part 2 – Wall Construction and Music


Overview

I do a lot of woodworking and carpentry at home. Much to my family’s chagrin, our house is in a constant state of flux. I tend to subscribe to the Norm Abram school of woodworking, where there are tools and jigs for everything. Because of this, I have a lot of woodworking and carpentry tools around. It’s not often I get to use them for my day job, but I found just the way to do it.

In part one of this series, I covered how to use Windows Remote Wiring to wire up LEDs to an Arduino and control them from a Windows 10 UWP app. In this post, we’ll get to constructing the actual wall.

This post covers:

  • Constructing the Wall
  • Music and UWP MIDI
  • Fonts and Title Style

The remainder of the series will be posted this week. Once they are up, you’ll be able to find the other posts here:

  • Part 1 – Introduction and Remote Wiring
  • Part 2 – Constructing the wall and adding Music (this post)
  • Part 3 – Adding voice recognition and intelligence

If you’re not familiar with the wall, please go back and read part 1 now. In that, I described the inspiration for this project, as well as the electronics required.

Constructing the Wall

In the show Stranger Things, “the wall” that’s talked about is an actual wall in a living room. For this version, I considered a few different sizes for the wall. It had to be large enough to be easily visible during a keynote and other larger-room presentations, but small enough that I could fit it in the back of the van, or pack in a special box to (expensively) ship across the country. That meant it couldn’t be completely spread out like the one in the TV show. But at the same time, the letters still had to be large enough so that they looked ok next to the full-size Christmas lights.

Finally, I didn’t want any visible seams in the letter field, or anything that would need to be rewired or otherwise modified to set it up. Seams are almost impossible to hide well once a board has traveled a bit. Plus, demo and device-heavy keynote setup is always very time-constrained, so I needed to make sure I could have the whole thing set up in just a few minutes. Whenever I come to an event, the people running it are stunned by the amount of stuff I put on a table. I typically fill a 2×8 table with laptops, devices, cameras, and more.

I settled on using a 4’ x 4’ sheet of ½” plywood as the base, with poplar from the local home store as reinforcement around the edges. I cut the plywood sheet into 32” and 16” pieces to make it easier to ship, and also so it would easily fit in the back of the family van for the first event we drove to.

The wallpapered portion of the wall ended up being 48” wide and 32” tall. The remaining paneled portion is just under 16” tall. The removable bottom part turned out to be quite heavy, so I left it off when shipping to Las Vegas for DEVintersection.

To build the bottom panel, I considered getting a classic faux wood panel from the local Home Depot and cutting it to size for this. But I really didn’t want a whole 4×8 sheet of fake wood paneling laying around an already messy shop. So instead I used left-over laminate flooring from my laundry room remodel project and cut it to length. Rather than snap the pieces tight together, I left a gap, and then painted the gaps black to give it that old 70s/80s paneling look.


The size of this version of the wall does constrain the design a bit. I didn’t try to match the same layout that the letters had in the show, except for having the right letters on the right row. The wall in the show is spaced out enough that you could easily fill a full 4×8 sheet and still look a bit cramped.

The most time-consuming part of constructing the wall was finding appropriately ugly wallpaper. Not surprisingly, a search for “ugly wallpaper” doesn’t generally bring up items for sale :). In the end, I settled for something that was in roughly the same ugliness class as the show wallpaper, but nowhere near an actual match. If you use the wallpaper I did, I suggest darkening it a bit with a tea stain or something similar. As-is, it’s a bit too bright.

Note that the price has gone up significantly since I bought it (perhaps I started an ugly wallpaper demand trend?), so I encourage you to look for other sources. If you find a source for the exact wallpaper, please do post it in the comments below!

Another option, of course, is to use your art skills and paint the “wallpaper” manually. It might actually be easier than hanging wallpaper on plywood, which as it turns out, is not as easy as it sounds. In any case, do the hanging in your basement or some other place that will be ok with getting wet and glued-up.

Here it is with my non-professional wallpaper job. It may look like I’m hanging some ugly sheets out to dry, but this is wallpaper on plywood.

[Photo: the wallpapered plywood panels]

When painting the letters on the board, I divided the area in three sections vertically, and used a leftover piece of flooring as a straight edge. That helped there, but didn’t do anything for my letter spacing / kerning.

To keep the paint looking messy, I used a cheap 1” chip brush as the paint brush. I dabbed on a bit extra in a few places to add drips, and went back over any areas that didn’t come out quite the way I wanted, like the letter “G.”


Despite measuring things out, I ran out of room when I got to WXYZ and had to squish things together a bit. I blame all the white space around the “V”. There’s a bit of a “Castle of uuggggggh” thing going on at the end of the painted alphabet.


Once the painting was complete, I used some pre-colored corner and edge trim to cover the top and bottom and make it look a bit more like the show. I attached most trim with construction glue and narrow crown staples (and cleaned up the glue after I took the above photo). If you want to be more accurate and have the time, use dark stained pine chair rail on the bottom edge, between the wallpapered section and the paneled section.

Here you can see the poplar one-by support around the edges of the plywood. I used a combination of 1×3 and 1×4 that I had around my shop. Plywood, especially plywood soaked with wallpaper paste, doesn’t like to stay flat. For that reason, as well as for shipping reasons, the addition of the poplar was necessary.

[Photo: the poplar supports and wiring on the back of the wall]

You can see some of the wiring in this photo, so let’s talk about that.

Preparing and Wiring the Christmas lights

There are two important things to know about the Christmas lights:

  1. They are LEDs, not incandescent lamps.
  2. They are not actually wired in a string, but are instead individually wired to the control board.

I used normal 120v AC LED lights. LEDs, like regular incandescent lamps, don’t really care about AC or DC, so it’s easy enough to find LEDs to repurpose for this project. I just had to pick ones which didn’t have a separate transformer or anything odd like that. Direct 120v plug-in only.

The LED lights I sacrificed for this project are Sylvania Stay-Lit Platinum LED Indoor/Outdoor C9 Multi-Colored Christmas Lights. They had the right general size and look. I purchased two packs for this because I was only going to use the colors actually used on the show and also because I wanted to have some spares for when the C9 housings were damaged in transit, or when I blew out an LED or two.

There are almost certainly other brands that will work, as long as they are LED C9 lamps and the wires are wrapped in a way that you can unravel.

When preparing the lamps, I cut the wires approximately halfway between the two lamps. I also discarded any lamps which had three wires going into them, as I didn’t want to bother trying to wire those up. Additionally, I discarded any of the lumps in the wires where fuses or resistors were kept.


For one evening, my desk was completely covered in severed LED Christmas lamps.

Next, I figured out the polarity of the LED leads and marked them with black marker. It’s important to know the anode from the cathode here because wiring in reverse will both fail to work, and likely burn out the LED, making subsequent trials also fail. Through trial and error, I found the little notch on the inside of the lamp always pointed in the same way, and that it was in the same position relative to the outside clip.

Once marked, I took note of the colors used on the show and following the same letter/color pairings, drilled an approximately ¼” hole above each letter and inserted both wires for the appropriate colored lamp through to the back. Friction held them in place until I could come through with the hot glue gun and permanently stick them there.

From there, I linked each positive (anode) wire on the LEDs together by twisting the wires together with additional lengths of wire and taping over them with electrical tape. The wire I used here was spare wire from the light string. This formed one continuous string connecting all the LED anodes together.

Next, I connected the end of that string to the +3.3v output on the Arduino. 3.3v is plenty to run these LEDs. The connection is not obvious in the photos, but I used a screw on the side of the electronics board and wired one end to the Arduino and the other end to the light string.

Finally, I wired the negative (cathode) wires to their individual terminals on the electronics board. I used a spool of heavier stranded wire here that would hold up to twisting around the screw terminals. For speed, I used wire nuts to connect those wires to the cathode wire on the LED. That’s all the black wire you see in this photo.

[Photo: the cathode wiring and wire nuts on the back of the wall]

To make it look like one string of lights, I ran a twisted length of the Christmas light wire pairs (from the same light kit) through the clips on each lamp. I didn’t use hot glue here, but just let it go where it wanted. The effect is such that it looks like one continuous strand of Christmas lights; you only see the wires going into the wall if you look closely.

[Photo: the light string from the front of the wall]

I attached the top and bottom together using 1×3 maple boards that I simply screwed to both the top and bottom, and then disassembled when I wanted to tear it down.


The visuals were all done at that point. I could have stopped there, but one of my favorite things about Stranger Things is the soundtrack. Given that a big part of my job at Microsoft is working with musicians and music app developers, and with the team which created the UWP MIDI API, I knew I had to incorporate that into this project.

Music / MIDI

A big part of the appeal of Stranger Things is the John Carpenter-style, mostly analog synthesizer soundtrack by the band Survive (with some cameos by Tangerine Dream). John Carpenter, Klaus Schulze and Tangerine Dream have always been favorites of mine, and I can’t help but feel a shiver when I hear a good fat synth-driven soundtrack. They have remained my inspiration when recording my own music.

So, it would have been just wrong of me to do the demo of the wall without at least some synthesizer work in the background. Playing it live was not an option and I wasn’t about to bring a huge rig, so I sequenced the main arpeggio and kick drum in my very portable Elektron Analog Four using some reasonable stand-ins for the sounds.

At the events, I would start and clock the Analog Four using a button on the app and my Windows 10 UWP MIDI Library clock generator. The only lengthy part of this code is where I check for the Analog Four each time. That’s a workaround because my MIDI library, at the time of this writing, doesn’t expose the hardware add/remove event. I will fix that soon.


private void StartMidiClock()
{
    // I do this every time rather than listen for device add/remove
    // because my library didn't raise the add/remove event in this version
    SelectMidiOutputDevices();

    _midiClock.Start();

    System.Diagnostics.Debug.WriteLine("MIDI started");
}

private void StopMidiClock()
{
    _midiClock.Stop();

    System.Diagnostics.Debug.WriteLine("MIDI stopped");
}


private const string _midiDeviceName = "Analog Four";
private async void SelectMidiOutputDevices()
{
    _midiClock.OutputPorts.Clear();

    IMidiOutPort port = null;

    foreach (var descriptor in _midiWatcher.OutputPortDescriptors)
    {
        if (descriptor.Name.Contains(_midiDeviceName))
        {
            port = await MidiOutPort.FromIdAsync(descriptor.Id);

            break;
        }
    }

    if (port != null)
    {
        _midiClock.OutputPorts.Add(port);
    }
}

For this code to work, I just set the Analog Four to receive MIDI clock and MIDI start/stop messages on the USB port. The sequence itself is already programmed in by me, so all I need to do is kick it off.

If you want to create a version of the sequence yourself, the main riff is a super simple up/down arpeggio of these notes:

picture10

You can vamp on top of that to bring in more of the sound from what S U R V I V E made. I left it as it was and simply played the filter knob a bit to bring it in. A short version of that may be found on my personal SoundCloud profile here.

There are many other components to the music, including a muted kick drum type of sound, a bass line, some additional melody and some other interesting effects, but I hope this helps get you started.
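
If you don’t have a hardware sequencer handy, you could also fire the notes directly from the app with the Windows.Devices.Midi APIs. Here’s a minimal sketch; the port is the same kind of IMidiOutPort the clock code above sends to, and the note numbers are my guess at a similar up/down arpeggio, not a transcription of the riff:


private async Task PlayArpeggioAsync(IMidiOutPort port)
{
    // up/down arpeggio as MIDI note numbers (assumed, not transcribed)
    byte[] notes = { 48, 52, 55, 59, 60, 59, 55, 52 };

    foreach (byte note in notes)
    {
        port.SendMessage(new MidiNoteOnMessage(0, note, 100));
        await Task.Delay(125); // one sixteenth note at 120 BPM
        port.SendMessage(new MidiNoteOffMessage(0, note, 0));
    }
}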

If you’re interested in the synthesizers behind the music, and a place to hear the music itself, check out this tour of S U R V I V E ’s studio.

The final thing that I needed to include here was a nod to the visual style of the opening sequence of the show.

Fonts and Title Style

If you want to create your own title card in a style similar to the show, the font ITC Benguiat is either the same one used, or a very close match. It’s readily available to anyone who wants to license it. I licensed it from Fonts.com for $35 for my own project. The version I ended up using was the regular book font, but I think the Condensed Bold is probably a closer fit.

There are tons of pages, sites, videos, etc. using the title style, but be careful about what you do here, as you don’t want to infringe on the show’s trademarks or other IP. When in doubt, consult your lawyer. I did.

picture11

That’s using just the outline and glow text effects. You can do even better in Adobe Photoshop, especially if you add in some lighting effects, adjust the character spacing and height, and use large descending capital letters, like I did at the first event. But I was able to put together the mockup above quickly in PowerPoint using the ITC Benguiat font.

If you don’t want to license a font and then work with the red glow in Adobe Photoshop, you can also create simple versions of the title card at http://makeitstranger.com/

None of that is required for the wall itself, but can help tie things together if you are presenting several related and themed demos like I did. Consider it a bit of polish.

With that, we have the visuals and sound all wrapped up. You could use the wall as-is at this point, simply giving it text to display. That’s not quite enough for what I wanted to show, though. Next up, we need to give the bot a little intelligence, and save on some typing.

Resources

Questions or comments? Have your own version of the wall, or used the technology described here to help rid the universe of evil? Post below and follow me on Twitter @pete_brown

Most of all, thanks for reading!

Getting personal – speech and inking (App Dev on Xbox series)


The way users interact with apps on different devices has gotten much more personal lately, thanks to a variety of new Natural User Interface features in the Universal Windows Platform. These UWP patterns and APIs are available for developers to easily bring in capabilities for their apps that enable more human technologies. For the final blog post in the series, we have extended the Adventure Works sample to add support for Ink on devices that support it, and to add support for speech interaction where it makes sense (including both synthesis and recognition). Make sure to get the updated code for the Adventure Works Sample from the GitHub repository so you can refer to it as you read on.

And in case you missed the blog post from last week on how to enable great social experiences, we covered how to connect your app to social networks such as Facebook and Twitter, how to enable second screen experiences through Project “Rome”, and how to take advantage of the UWP Maps control and make your app location aware. To read last week’s blog post or any of the other blog posts in the series, or to watch the recordings from the App Dev on Xbox live event that started it all, visit the App Dev on Xbox landing page.

Adventure Works (v3)

picture1

We are continuing to build on top of the Adventure Works sample app we worked with in the previous two blog posts. If you missed those, make sure to check them out here and here. As a reminder, Adventure Works is a social photo app that allows the user to:

  • Capture, edit, and store photos for a specific trip
  • Auto-analyze and auto-tag friends using Cognitive Services vision APIs
  • View albums from friends on an interactive map
  • Share albums on social networks like Facebook and Twitter
  • Use one device to remote control slideshows running on another device using Project “Rome”
  • And more…

There is always more to be done, and for this final round of improvements we will focus on two sets of features:

  1. Ink support for annotating images, entering text naturally, and using inking as a presentation tool in connected slideshow mode.
  2. Speech Synthesis and Speech Recognition (with a little help from cognitive services for language understanding) to create a way to quickly access information using speech.

More Personal Computing with Ink

Inking in Windows 10 allows users with inking-capable devices to draw and annotate directly on the screen with a device like the Surface Pen – and if you don’t have a pen handy, you can use your finger or a mouse instead. Windows 10 built-in apps like Sticky Notes, Sketchpad and Screen sketch support inking, as do many Office products. Besides preserving drawings and annotations, inking also uses machine learning to recognize and convert ink to text. OneNote goes a step further by recognizing shapes and equations in addition to text.

picture2

Best of all, you can easily add inking functionality into your own apps, as we did for Adventure Works, with one line of XAML markup to create an InkCanvas. With just one more line, you can add an InkToolbar to your canvas that provides a color selector as well as buttons for drawing, erasing, highlighting, and displaying a ruler. (In case you have the Adventure Works project open, the InkCanvas and InkToolbar implementation can be found in PhotoPreviewView.)


<InkCanvas x:Name="Inker"></InkCanvas>
<InkToolbar TargetInkCanvas="{x:Bind Inker}" VerticalAlignment="Top"/>

The InkCanvas allows users to annotate their Adventure Works slideshow photos. This can be done both directly as well as remotely through the Project “Rome” code highlighted in the previous post. When done on the same device, the ink strokes are saved off to a GIF file which is then associated with the original slideshow image.

picture3

When the image is displayed again during later viewings, the strokes are extracted from the GIF file, as shown in the code below, and inserted back into a canvas layered on top of the image in PhotoPreviewView. The code for saving and extracting ink strokes is found in the InkHelpers class.


var file = await StorageFile.GetFileFromPathAsync(filename);
if (file != null)
{
    using (var stream = await file.OpenReadAsync())
    {
        inker.InkPresenter.StrokeContainer.Clear();
        await inker.InkPresenter.StrokeContainer.LoadAsync(stream);
    }
}
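
The save side is the mirror image. Here’s a minimal sketch (mine; the sample’s actual implementation lives in the same InkHelpers class), where folder is an assumed StorageFolder the app writes into:


var file = await folder.CreateFileAsync("annotation.gif", CreationCollisionOption.ReplaceExisting);
using (var stream = await file.OpenAsync(FileAccessMode.ReadWrite))
{
    // SaveAsync writes the strokes as a GIF with embedded ink data,
    // which is exactly what the loading code above reads back
    await inker.InkPresenter.StrokeContainer.SaveAsync(stream);
}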

Ink strokes can also be drawn on one device (like a Surface device) and displayed on another one (an Xbox One). In order to do this, the Adventure Works code actually collects the user’s pen strokes using the underlying InkPresenter object that powers the InkCanvas. It then converts the strokes into a byte array and serializes them over to the remote instance of the app. You can find out more about how this is implemented in Adventure Works by looking through the GetStrokeData method in SlideshowSlideView control and the SendStrokeUpdates method in SlideshowClientPage.
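
For the remote case, one way to get the strokes into a byte array is to save them to an in-memory stream and read the bytes back out. This is a sketch of the general approach, not the sample’s exact code:


var memStream = new InMemoryRandomAccessStream();
await inker.InkPresenter.StrokeContainer.SaveAsync(memStream);

var bytes = new byte[memStream.Size];
using (var reader = new DataReader(memStream.GetInputStreamAt(0)))
{
    await reader.LoadAsync((uint)memStream.Size);
    reader.ReadBytes(bytes); // bytes are now ready to send to the remote app
}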

It is sometimes useful to save the ink strokes and original image in a new file. In Adventure Works, this is done to create a thumbnail version of an annotated slide for quick display as well as for uploading to Facebook. You can find the code used to combine an image file with an ink stroke annotation in the RenderImageWithInkToFileAsync method in the InkHelpers class. It uses the Win2D DrawImage and DrawInk methods of a CanvasDrawingSession object to blend the two together, as shown in the snippet below.


CanvasDevice device = CanvasDevice.GetSharedDevice();
CanvasRenderTarget renderTarget = new CanvasRenderTarget(device, (int)inker.ActualWidth, (int)inker.ActualHeight, 96);

var image = await CanvasBitmap.LoadAsync(device, imageStream);
using (var ds = renderTarget.CreateDrawingSession())
{
    var imageBounds = image.GetBounds(device);

    //...

    ds.Clear(Colors.White);
    ds.DrawImage(image, new Rect(0, 0, inker.ActualWidth, inker.ActualWidth), imageBounds);
    ds.DrawInk(inker.InkPresenter.StrokeContainer.GetStrokes());
}

Ink Text Recognition

picture4

Adventure Works also takes advantage of Inking’s text recognition feature to let users handwrite the name of their newly created Adventures. This capability is extremely useful if someone is running your app in tablet mode with a pen and doesn’t want to bother with the onscreen keyboard. Converting ink to text relies on the InkRecognizer class. Adventure Works encapsulates this functionality in a templated control called InkOverlay which you can reuse in your own code. The core implementation of ink to text really just requires instantiating an InkRecognizerContainer and then calling its RecognizeAsync method.


var inkRecognizer = new InkRecognizerContainer();
var recognitionResults = await inkRecognizer.RecognizeAsync(_inker.InkPresenter.StrokeContainer, InkRecognitionTarget.All);

You can imagine this being very powerful when the user has a large form to fill out on a tablet device and they don’t have to use the onscreen keyboard.
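
To turn recognitionResults into a usable string, you pull the text candidates off each result. A minimal sketch, continuing the snippet above (my own code, not the sample’s):


var builder = new System.Text.StringBuilder();
foreach (var result in recognitionResults)
{
    // each result exposes its guesses, best match first
    var candidates = result.GetTextCandidates();
    if (candidates.Count > 0)
    {
        builder.Append(candidates[0]).Append(' ');
    }
}
string recognizedName = builder.ToString().Trim();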

More Personal Computing with Speech

There are two sets of APIs that are used in Adventure Works that enable a great natural experience using speech. First, UWP speech APIs allow developers to integrate speech-to-text (recognition) and text-to-speech (synthesis) into their UWP apps. Speech recognition converts words spoken by the user into text for form input, for text dictation, to specify an action or command, and to accomplish tasks. Both free-text dictation and custom grammars authored using Speech Recognition Grammar Specification are supported.

Second, Language Understanding Intelligent Service (LUIS) is a Microsoft Cognitive Services API that uses machine learning to help your app figure out what people are trying to say. For instance, if someone wants to order food, they might say “find me a restaurant” or “I’m hungry” or “feed me”. You might try a brute force approach to recognize the intent to order food, listing out all the variations on the concept “order food” that you can think of – but of course you’re going to come up short. LUIS lets you set up a model for the “order food” intent that learns, over time, what people are trying to say.

In Adventure Works, these features are combined to create a variety of speech related functionalities. For instance, the app can listen for an utterance like “Adventure Works, start my latest slideshow” and it will naturally open a slideshow for you when it hears this command. It can also respond using speech when appropriate to answer a question. LUIS, in turn, augments this speech recognition with language understanding to improve the recognition of natural language phrases.

picture5

The speech capabilities for our app are wrapped in a simple assistant called Adventure Works Aide (look for AdventureWorksAideView.xaml). Saying the phrase “Adventure Works…” will invoke it. It will then listen for spoken patterns such as:

  • “What adventures are in <location>.”
  • “Show me <person>’s adventure.”
  • “Who is closest to me.”

Adventure Works Aide is powered by a custom SpeechService class. There are two SpeechRecognizer instances that are used at different times, first to recognize the “Adventure Works” phrase at any time:


_continousSpeechRecognizer = new SpeechRecognizer();
_continousSpeechRecognizer.Constraints.Add(new SpeechRecognitionListConstraint(new List<String>() { "Adventure Works" }, "start"));
var result = await _continousSpeechRecognizer.CompileConstraintsAsync();
//...
await _continousSpeechRecognizer.ContinuousRecognitionSession.StartAsync(SpeechContinuousRecognitionMode.Default);

and then to understand free-form natural language and convert it to text:

_speechRecognizer = new SpeechRecognizer();
var result = await _speechRecognizer.CompileConstraintsAsync();
SpeechRecognitionResult speechRecognitionResult = await _speechRecognizer.RecognizeAsync();
if (speechRecognitionResult.Status == SpeechRecognitionResultStatus.Success)
{
    string str = speechRecognitionResult.Text;
}

As you can see, the SpeechRecognizer API can be used both to listen continuously for specific constraints throughout the lifetime of the app and to convert any free-form speech to text at a specific time. The continuous recognition session can be set to recognize phrases from a list of strings, or it can even use a more structured SRGS grammar file, which provides the greatest control over the speech recognition by allowing multiple semantic meanings to be recognized at once. However, because we want to understand every variation the user might say and use LUIS for our semantic understanding, we use the free-form speech recognition with the default constraints.
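
For reference, swapping the list constraint for an SRGS grammar file takes only a couple of lines. This is a sketch rather than the sample’s code, and AdventureGrammar.xml is a hypothetical grammar file in the app package:


var grammarFile = await StorageFile.GetFileFromApplicationUriAsync(
    new Uri("ms-appx:///Assets/AdventureGrammar.xml")); // hypothetical file
var grammarConstraint = new SpeechRecognitionGrammarFileConstraint(grammarFile, "adventures");
_speechRecognizer.Constraints.Add(grammarConstraint);
await _speechRecognizer.CompileConstraintsAsync();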

Note: before using any of the speech APIs on Xbox, the user must give your application permission to access the microphone. Not all APIs automatically show the dialog currently, so you will need to invoke the dialog yourself. Check out the CheckForMicrophonePermission function in SpeechService.cs to see how this is done in Adventure Works.

When the continuous speech recognizer recognizes the key phrase, it immediately stops listening, shows the UI for the AdventureWorksAide to let the user know that it’s listening, and starts listening for natural language.


await _continousSpeechRecognizer.ContinuousRecognitionSession.CancelAsync();
ShowUI();
SpeakAsync("hey!");
var spokenText = await ListenForText();

Subsequent utterances are passed on to LUIS which uses training data we have provided to create a machine learning model to identify specific intents. For this app, we have three different intents that can be recognized: showuser, showmap, and whoisclosest (but you can always add more). We have also defined an entity for username for LUIS to provide us with the name of the user when the showuser intent has been recognized. LUIS also provides several pre-built entities that have been trained for specific types of data; in this case, we are using an entity for geography locations in the showmap intent.

picture6

To use LUIS in the app, we used the official NuGet library, which allowed us to register specific handlers for each intent when we send over a phrase.


var handlers = new LUISIntentHandlers();
_router = IntentRouter.Setup(Keys.LUISAppId, Keys.LUISAzureSubscriptionKey, handlers, false);
var handled = await _router.Route(text, null);

Take a look at the HandleIntent method in the LUISAPI.cs file and the LUISIntentHandlers class, which handles each intent defined in the LUIS portal; both are useful references for future LUIS implementations.

Finally, once the text has been processed by LUIS and the intent has been processed by the app, the AdventureWorksAide might need to respond back to the user using speech, and for that, the SpeechService uses the SpeechSynthesizer API:


_speechSynthesizer = new SpeechSynthesizer();
var syntStream = await _speechSynthesizer.SynthesizeTextToStreamAsync(toSpeak);
_player = new MediaPlayer();
_player.Source = MediaSource.CreateFromStream(syntStream, syntStream.ContentType);
_player.Play();

The SpeechSynthesizer API can specify a specific voice to use for the generation based on voices installed on the system, and it can even use SSML (Speech Synthesis Markup Language) to control how the speech is generated, including volume, pronunciation, and pitch.
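
Here’s a sketch of that SSML path (my own example; the player code is the same as above, only the synthesis call changes):


string ssml =
    "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>" +
    "Your closest adventure is <prosody pitch='high' rate='slow'>two miles</prosody> away." +
    "</speak>";
var syntStream = await _speechSynthesizer.SynthesizeSsmlToStreamAsync(ssml);
_player.Source = MediaSource.CreateFromStream(syntStream, syntStream.ContentType);
_player.Play();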

The entire flow, from invoking the Adventure Works Aide to sending the spoken text to LUIS, and finally responding to the user is handled in the WakeUpAndListen method.

There’s more

Though not used in the current version of the project, there are other APIs that you can take advantage of for your apps, both as part of the UWP platform and as part of Cognitive Services.

For example, on desktop and mobile devices, Cortana can recognize speech or text directly from the Cortana canvas and activate your app or initiate an action on behalf of your app. It can also expose actions to the user based on insights about them, and with user permission it can even complete the action for them. Using a Voice Command Definition (VCD) file, developers have the option to add commands directly to the Cortana command set (commands like: “Hey Cortana, show adventure in Europe in Adventure Works”). Cortana app integration is also part of our long-term plans for voice support on Xbox, even though it is not supported today. Visit the Cortana portal for more info.
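
To give a feel for the format, a minimal VCD file for a command like the one above might look like this (my sketch; it is not part of the Adventure Works sample):


<?xml version="1.0" encoding="utf-8"?>
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.2">
  <CommandSet xml:lang="en-us" Name="AdventureWorksCommandSet">
    <CommandPrefix>Adventure Works</CommandPrefix>
    <Example>Show adventure in Europe</Example>
    <Command Name="showAdventure">
      <Example>Show adventure in Europe</Example>
      <ListenFor>show adventure in {location}</ListenFor>
      <Feedback>Looking for adventures in {location}</Feedback>
      <Navigate />
    </Command>
    <PhraseTopic Label="location" Scenario="Natural Language" />
  </CommandSet>
</VoiceCommands>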

In addition, there are several speech and language related Cognitive Services APIs that are simply too cool not to mention:

  • Custom Recognition Service – Overcomes speech recognition barriers like speaking style, background noise, and vocabulary.
  • Speaker Recognition – Identify individual speakers or use speech as a means of authentication with the Speaker Recognition API.
  • Linguistic Analysis – Simplify complex language concepts and parse text with the Linguistic Analysis API.
  • Translator – Translate speech and text with a simple REST API call.
  • Bing Spell Check – Detect and correct spelling mistakes within your app.

The personal computing features provided through Cognitive Services are constantly being refreshed, so be sure to check back often to see what new machine learning capabilities have been made available to you.

That’s all folks

This was the last blog post (and sample app) in the App Dev on Xbox series, but if you have a great idea that we should cover, please let us know; we are always looking for cool app ideas to build and features to implement. Make sure to check out the app source on our official GitHub repository, read through some of the resources provided, read through some of the other blog posts or watch the event if you missed it, and let us know what you think through the comments below or on Twitter.

Happy coding!

Resources

Previous Xbox Series Posts


The “Internet of Stranger Things” Wall, Part 3 – Voice Recognition and Intelligence


Overview

I called this project the “Internet of Stranger Things,” but so far, there hasn’t been an internet piece. In addition, there really hasn’t been anything that couldn’t be easily accomplished on an Arduino or a Raspberry Pi. I wanted this demo to have more moving parts to improve the experience and also demonstrate some cool technology.

First is voice recognition. Proper voice recognition typically takes a pretty decent computer and a good OS. This isn’t something you’d generally do on an Arduino alone; it’s simply not designed for that kind of workload.

Next, I wanted to wire it up to the cloud, specifically to a bot. The interaction in the show is a conversation between two people, so this was a natural fit. Speaking of “natural,” I wanted the bot to understand many different forms of the questions, not just a few hard-coded questions. For that, I wanted to use the Language Understanding Intelligent Service (LUIS) to handle the parsing.

This third and final post covers:

  • Adding Windows Voice Recognition to the UWP app
  • Creating the natural language model in LUIS
  • Building the Bot Framework Bot
  • Tying it all together

You can find the other posts here:

If you’re not familiar with the wall, please go back and read part one now. In that, I describe the inspiration for this project, as well as the electronics required.

Adding Voice Recognition

In the TV show, Joyce doesn’t type her queries into a 1980s era terminal to speak with her son; she speaks aloud in her living room. I wanted to have something similar for this app, and the built-in voice recognition was a natural fit.

Voice recognition in Windows 10 UWP apps is super-simple to use. You have the option of using the built-in UI, which is nice but may not fit your app style, or simply letting the recognition happen while you handle events.

There are good samples for this in the Windows 10 UWP Samples repo, so I won’t go into great detail here. But I do want to show you the code.

To keep the code simple, I used two recognizers. One is for basic local echo testing, especially useful if connectivity in a venue is unreliable. The second is for sending to the bot. You could use a single recognizer and then just check some sort of app state in the events to decide if you were doing something for local echo or for the bot.

First, I initialized the two recognizers and wired up the two events that I care about in this scenario.


SpeechRecognizer _echoSpeechRecognizer;
SpeechRecognizer _questionSpeechRecognizer;

private async void SetupSpeechRecognizer()
{
    _echoSpeechRecognizer = new SpeechRecognizer();
    _questionSpeechRecognizer = new SpeechRecognizer();

    await _echoSpeechRecognizer.CompileConstraintsAsync();
    await _questionSpeechRecognizer.CompileConstraintsAsync();

    _echoSpeechRecognizer.HypothesisGenerated +=
                   OnEchoSpeechRecognizerHypothesisGenerated;
    _echoSpeechRecognizer.StateChanged +=
                   OnEchoSpeechRecognizerStateChanged;

    _questionSpeechRecognizer.HypothesisGenerated +=
                   OnQuestionSpeechRecognizerHypothesisGenerated;
    _questionSpeechRecognizer.StateChanged +=
                   OnQuestionSpeechRecognizerStateChanged;

}

The HypothesisGenerated event lets me show real-time recognition results, much like when you use Cortana voice recognition on your PC or phone. In that event handler, I just display the results. The only real purpose of this is to show that some recognition is happening in a way similar to how Cortana shows that she’s listening and parsing your words. Note that the hypothesis and the state events come back on a non-UI thread, so you’ll need to dispatch them like I did here.


private async void OnEchoSpeechRecognizerHypothesisGenerated(
        SpeechRecognizer sender,
        SpeechRecognitionHypothesisGeneratedEventArgs args)
{
    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
    {
        EchoText.Text = args.Hypothesis.Text;
    });
}

The next is the StateChanged event. This lets me alter the UI based on what is happening. There are lots of good practices here, but I took an expedient route and simply changed the background color of the text box. You might consider running an animation on the microphone or something when recognition is happening.


private SolidColorBrush _micListeningBrush =
                     new SolidColorBrush(Colors.SkyBlue);
private SolidColorBrush _micIdleBrush =
                     new SolidColorBrush(Colors.White);

private async void OnEchoSpeechRecognizerStateChanged(
        SpeechRecognizer sender,
        SpeechRecognizerStateChangedEventArgs args)
{
    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
    {
        switch (args.State)
        {
            case SpeechRecognizerState.Idle:
                EchoText.Background = _micIdleBrush;
                break;

            default:
                EchoText.Background = _micListeningBrush;
                break;
        }
    });
}

I have equivalent handlers for the two events for the “ask a question” speech recognizer as well.

Finally, some easy code in the button click handler kicks off recognition.


private async void DictateEcho_Click(object sender, RoutedEventArgs e)
{
    var result = await _echoSpeechRecognizer.RecognizeAsync();

    EchoText.Text = result.Text;
}

The end result looks and behaves well. The voice recognition is really good.

gif1

So now we can talk to the board from the UWP PC app, and we can talk to the app using voice. Time to add just a little intelligence behind it all.

Creating the Natural Language Model in LUIS

The backing for the wall is a bot in the cloud. I wanted the bot to be able to answer questions, but I didn’t want to have the exact text of the question hard-coded in the bot. If I wanted to hard-code them, a simple web service or even local code would do.

What I really want is the ability to ask questions using natural language, and map those questions (or “utterances,” as they’re called in LUIS) to specific master questions (or “intents” in LUIS). That way, I can ask the questions a few different ways, but still get back an answer that makes sense. My colleague, Ryan Volum, helped me figure out how LUIS worked. You should check out his Getting Started with Bots Microsoft Virtual Academy course.

So I started thinking about the types of questions I wanted answered, and the various ways I might ask them.

For example, when I want to know the location of where Will is, I could ask, “Where are you hiding?” or “Tell me where you are!” or “Where can I find you?” When checking to see if someone is listening, I might ask, “Are you there?” or “Can you hear me?” As you can imagine, hard-coding all these variations would be tedious, and would certainly miss out on ways someone else might ask the question.

I then created those in LUIS with each master question as an Intent, and each way I could think of asking that question then trained as an utterance mapped to that intent. Generally, the more utterances I add, the better the model becomes.

picture1

The above screen shot is not the entire list of Intents; I added a number of other Intents and continued to train the model.

For a scenario such as this, training LUIS is straightforward. My particular requirements didn’t include any entities or Regex, or any connections to a document database or Azure Search. If you have a more complex dialog, there’s a ton of power in LUIS to make the model as robust as you need, and to train it with errors and utterances found in actual use. If you want to learn more about LUIS, I recommend watching Module 5 in the Getting Started with Bots MVA.

Once my LUIS model was set up and working, I needed to connect it to the bot.

Building the Bot Framework Bot

The bot itself was the last thing I added to the wall. In fact, in my first demo of the wall, I had to type the messages in to the app instead of sending it out to a bot. Interesting, but not exactly what I was looking for.

I used the generic Bot Framework template and instructions from the Bot Framework developer site. This creates a generic bot, a simple C# web service controller, which echoes back anything you send it.

Next, following the Bot Framework documentation, I integrated LUIS into the bot. First, I created the class which derived from LuisDialog, and added in code to handle the different intents. Note that this model is changing over time; there are other ways to handle the intents using recognizers. For my use, however, this approach worked just fine.

The answers from the bot are very short, and I keep no context. Responses from the Upside Down need to be short enough to light up on the wall without putting everyone to sleep reading a long dissertation letter by letter.


namespace TheUpsideDown
{
    // Reference:
    // https://docs.botframework.com/en-us/csharp/builder/sdkreference/dialogs.html

    // Partial class is excluded from project. It contains keys:
    //
    // [Serializable]
    // [LuisModel("model id", "subscription key")]
    // public partial class UpsideDownDialog
    // {
    // }
    //
    public partial class UpsideDownDialog : LuisDialog<object>
    {
        // None
        [LuisIntent("")]
        public async Task None(IDialogContext context, LuisResult result)
        {
            string message = $"Eh";
            await context.PostAsync(message);
            context.Wait(MessageReceived);
        }


        [LuisIntent("CheckPresence")]
        public async Task CheckPresence(IDialogContext context, LuisResult result)
        {
            string message = $"Yes";
            await context.PostAsync(message);
            context.Wait(MessageReceived);
        }

        [LuisIntent("AskName")]
        public async Task AskName(IDialogContext context, LuisResult result)
        {
            string message = $"Will";
            await context.PostAsync(message);
            context.Wait(MessageReceived);
        }

        [LuisIntent("FavoriteColor")]
        public async Task FavoriteColor(IDialogContext context, LuisResult result)
        {
            string message = $"Blue ... no Gr..ahhhhh";
            await context.PostAsync(message);
            context.Wait(MessageReceived);
        }

        [LuisIntent("WhatIShouldDoNow")]
        public async Task WhatIShouldDoNow(IDialogContext context, LuisResult result)
        {
            string message = $"Run";
            await context.PostAsync(message);
            context.Wait(MessageReceived);
        }

        ...

    }
}

Once I had that in place, it was time to test. The easiest way to test before deployment is to use the Bot Framework Channel Emulator.

First, I started the bot in my browser from Visual Studio. Then, I opened the emulator and plugged in the URL from the project properties, and cleared out the credentials fields. Next, I started typing in questions that I figured the bot should be able to handle.

picture2

It worked great! I was pretty excited, because this was the first bot I had ever created, and not only did it work, but it also had natural language processing. Very cool stuff.

Now, if you notice in the picture, there are red circles on every reply. It took a while to figure out what was up. As it turns out, the template for the bot includes an older version of the NuGet bot builder library. Once I updated that to the latest version (3.3 at this time), the “Invalid Token” error that local IIS was throwing went away.

Be sure to update the bot builder library NuGet package to the latest version.

Publishing and Registering the Bot

Next, it was time to publish it to my Azure account so I could use the Direct Line API from my client app, and also so I could make the bot available via other channels. I used the built-in Visual Studio publish (right click the project, click “Publish”) to put it up there. I had created the Azure Web App in advance.

picture3

Next, I registered the bot on the Bot Framework site. This step is necessary to be able to use the Direct Line API and make the bot visible to other channels. I had some issues getting it to work at first, because I didn’t realize I needed to update the credential information in the web.config of the bot service. The BotId field in the web.config can be most anything. Most tutorials skip telling you what to put in that field, and it doesn’t match up with anything on the portal.
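
For reference, the relevant appSettings section of the web.config looks something like this (the values here are placeholders; the app ID and password come from the registration portal):


<appSettings>
  <add key="BotId" value="TheUpsideDown" />
  <add key="MicrosoftAppId" value="(app ID from the registration)" />
  <add key="MicrosoftAppPassword" value="(password from the registration)" />
</appSettings>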

picture4

As you can see, there are a few steps involved in getting the bot published and registered. For the Azure piece, follow the same steps as you would for any Web App. For the bot registration, be sure to follow the instructions carefully, and keep track of your keys, app IDs, and passwords. Take your time the first time you go through the process.

You can see in the previous screen shot that I have a number of errors shown. Those errors were because of that NuGet package version issue mentioned previously. It wasn’t until I had the bot published that I realized there was an error, and went back and debugged it locally.

Testing the Published Bot in Skype

I published and registered the bot primarily to be able to use the Direct Line channel. But it’s a bot, so it makes sense to test it using a few different channels. Skype is a pretty obvious one, and is enabled by default, so I hit that first.

picture5

Through Skype, I was able to verify that it was published and worked as expected.

Using the Direct Line API

When you want to communicate to a bot from code, a good way to do it is using the Direct Line API. This REST API provides an additional layer of authentication and keeps everything within a structured bot framework. Without it, you might as well just make direct REST calls.

First, I needed to enable the Direct Line channel in the bot framework portal. Once I did that, I was able to configure it and get the super-secret key which enables me to connect to the bot. (The disabled field was a pain to try and copy/paste, so I just did a view source, and grabbed the key from the HTML.)

picture6

That’s all I needed to do in the portal. Next, I needed to set up the client to speak to the Direct Line API.

First, I added the Microsoft.Bot.Connector.DirectLine NuGet package to the UWP app. After that, I wrote a pretty small amount of code for the actual communication. Thanks to my colleague, Shen Chauhan (@shenchauhan on Twitter), for providing the boilerplate in his Hunt the Wumpus app.


private const string _botBaseUrl = "(the url to the bot /api/messages)";
private const string _directLineSecret = "(secret from direct line config)";


private DirectLineClient _directLine;
private string _conversationId;


public async Task ConnectAsync()
{
    _directLine = new DirectLineClient(_directLineSecret);

    var conversation = await _directLine.Conversations
            .NewConversationWithHttpMessagesAsync();
    _conversationId = conversation.Body.ConversationId;

    System.Diagnostics.Debug.WriteLine("Bot connection set up.");
}

private async Task<string> GetResponse()
{
    var httpMessages = await _directLine.Conversations
                  .GetMessagesWithHttpMessagesAsync(_conversationId);

    var messages = httpMessages.Body.Messages;

    // our bot only returns a single response, so we won't loop through
    // First message is the question, second message is the response
    if (messages?.Count > 1)
    {
        // select latest message -- the response
        var text = messages[messages.Count-1].Text;
        System.Diagnostics.Debug.WriteLine("Response from bot was: " + text);

        return text;
    }
    else
    {
        System.Diagnostics.Debug.WriteLine("Response from bot was empty.");
        return string.Empty;
    }
}


public async Task<string> TalkToTheUpsideDownAsync(string message)
{
    System.Diagnostics.Debug.WriteLine("Sending bot message");

    var msg = new Message();
    msg.Text = message;


    await _directLine.Conversations.PostMessageAsync(_conversationId, msg);

    return await GetResponse();
}

The client code calls the TalkToTheUpsideDownAsync method, passing in the question. That method fires off the message to the bot, via the Direct Line connection, and then waits for a response.

Because the bot sends only a single message, and only in response to a question, the response comes back as two messages: the first is the message sent from the client, the second is the response from the service. This helps to provide context.

Finally, I wired it to the SendQuestion button on the UI. I also wrapped it in calls to start and stop the MIDI clock, giving us a bit of Stranger Things thinking music while the call is being made and the result displayed on the LEDs.


private async void SendQuestion_Click(object sender, RoutedEventArgs e)
{
    // start music
    StartMidiClock();

    // send question to service
    var response = await _botInterface.TalkToTheUpsideDownAsync(QuestionText.Text);

    // display answer
    await RenderTextAsync(response);

    // stop music
    StopMidiClock();
}

With that, it is 100% complete and ready for demos!

What would I change?

If I were to start this project anew today and had a bit more time, there are a few things I might change.

I like the voice recognition, Bot Framework, and LUIS stuff. Although I could certainly make the conversation more interactive, there’s really nothing I would change there.

On the electronics, I would use a breadboard-friendly Arduino, not hot-glue an Arduino to the back. It pains me to have hot-glued the Arduino to the board, but I was in a hurry and had the glue gun at hand.

I would also use a separate power supply for LEDs. This is especially important if you wish to light more than one LED at a time, as eventually, the Arduino will not be able to support the current draw required by many LED lights.

If I had several weeks, I would have my friends at DF Robot spin a board that I design, rather than use a regular breadboard, or even a solder breadboard. I generally prefer to get boards spun for projects, as they are more robust, and DF Robot can do this for very little cost.

Finally, I would spend more time to find even uglier wallpaper <g>.

Here’s a photo of the wall, packaged up and ready for shipment to Las Vegas (at the time of this writing, it’s in transit), waiting in my driveway. The box was 55” tall, around 42” wide and 7” thick, but only about 25 lbs. It has ¼” plywood on both faces, as well as narrower pieces along the sides. In between the plywood is 2” thick rigid insulating foam. Finally, the corners are protected with the spongier corner foam that came with the box.

It costs a stupid amount of money to ship something like that around, but it’s worth it for events. 🙂

picture7

After this, it’s going to Redmond where I’ll record a video walkthrough with Channel 9 during the second week of November.

What Next?

Windows Remote Wiring made this project quite simple to do. I was able to use the tools and languages I love to use (like Visual Studio and C#), but still get the IO of a device like the Arduino Uno. I was also able to use facilities available to a UWP app, and call into a simple bot of my own design. In addition to all that, I was able to use voice recognition and MIDI all in the same app, in a way that made sense.

The Bot Framework and LUIS stuff was all brand new to me, but was really fun to do. Now that I know how to connect app logic to a bot, there will certainly be more interactive projects in the future.

This was a fun project for me. It’s probably my last real maker project of the fall/winter, as I settle into the fall home renovation work and also gear up for the NAMM music event in January. But luckily, there have been many other posts here about Windows 10 IoT Core and our maker and IoT-focused technology. If this topic is interesting to you, I encourage you to take a spin through the archives and check them out.

Whatever gift-giving and receiving holiday you celebrate this winter, be sure to add a few Raspberry Pi 3 devices and some Arduino Uno boards on your list, because there are few things more enjoyable than cozying up to a microcontroller or some IoT code on a cold winter’s day. Oh, and if you steal a strand or two of lights from the tree, I won’t tell. 🙂

Resources

Questions or comments? Have your own version of the wall, or used the technology described here to help rid the universe of evil? Post below and follow me on Twitter @pete_brown

Most of all, thanks for reading!

Kinect demo code and new driver for UWP now available


Here’s a little memory test: Do you recall this blog, which posted back in May and promised to soon begin integrating Kinect for Windows into the Universal Windows Platform? Of course you do! Now we are pleased to announce two important developments in the quest to make Kinect functionality available to UWP apps.

First, by popular demand, the code that Alex Turner used during his Channel 9 video (above) is now available on GitHub as part of the Windows universal samples. With this sample, you can use Windows.Media.Capture.Frames APIs to enumerate the Kinect sensor’s RGB/IR/depth cameras and then use MediaFrameReader to stream frames. This API lets you access pixels of each individual frame directly in a highly efficient way.

These new functionalities debuted in the Windows 10 Anniversary Update, and the structure of the APIs should be familiar to those who’ve been using the Kinect SDK for years. But these new APIs are designed to work not only with the Kinect sensor but with any other sensors capable of delivering rich data streams—provided you have a matching device driver.
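
To give you a feel for the shape of these APIs, here is a rough sketch (mine, not the sample’s exact code) that finds a depth source and streams frames from it:


var groups = await MediaFrameSourceGroup.FindAllAsync();
var group = groups.FirstOrDefault(g =>
    g.SourceInfos.Any(i => i.SourceKind == MediaFrameSourceKind.Depth));

var capture = new MediaCapture();
await capture.InitializeAsync(new MediaCaptureInitializationSettings
{
    SourceGroup = group,
    SharingMode = MediaCaptureSharingMode.SharedReadOnly,
    MemoryPreference = MediaCaptureMemoryPreference.Cpu,
    StreamingCaptureMode = StreamingCaptureMode.Video
});

var depthSource = capture.FrameSources.Values
    .First(s => s.Info.SourceKind == MediaFrameSourceKind.Depth);
var reader = await capture.CreateFrameReaderAsync(depthSource);

reader.FrameArrived += (sender, args) =>
{
    using (var frame = sender.TryAcquireLatestFrame())
    {
        var bitmap = frame?.VideoMediaFrame?.SoftwareBitmap;
        // direct, efficient access to the pixels of this frame
    }
};

await reader.StartAsync();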

Which brings us to our second announcement: We have now enabled the Kinect driver on Windows Update. So if you’d like to try out this new functionality now, simply go to the Device Manager and update the driver for the Kinect sensor. In addition to enabling the new UWP APIs described above, the new driver also lets you use the Kinect color camera as a normal webcam. This means that apps which use a webcam, such as Skype, can now employ the Kinect sensor as their source. It also means that you can use the Kinect sensor to enable Windows Hello for authentication via facial recognition.

picture1

Another GitHub sample demonstrates how to use new special-correlation APIs, such as CameraIntrinsics or DepthCorrelatedCoordinateMapper, to process RGB and depth camera frames for background removal. These APIs take advantage of the fact that the Kinect sensor’s color and depth cameras are spatially correlated by calibration and depth frame data. This sample also shows how to access the Kinect sensor’s skeletal tracking data through a custom media stream in UWP apps with newly introduced APIs.

Finally, we should note that the Xbox Summer Update also enables these Kinect features through Windows.Media.Capture.Frames for UWP apps. Thus, apps that use the Kinect sensor’s RGB, infrared, and/or depth cameras will run on Xbox with the same code, and Xbox can also use the Kinect RGB camera as a normal webcam for Skype-like scenarios.

Judging from requests, we’re confident that many of you are eager to explore both the demo code and download the new driver. When you do, we want to hear about your experiences—what you liked, what you didn’t, and what enhancements you want to see. So send us your feedback!

Please note that, if you have technical questions about this post or would like to discuss Kinect with other developers and Microsoft engineers, we hope you will join the conversation on the Kinect for Windows v2 SDK forum. You can browse existing topics or ask a new question by clicking the Ask a question button on the forum webpage.

The Kinect for Windows Team

Key links

In Case You Missed it – This Week in Windows Developer


Happy (belated) Halloween, Windows Devs! This past week gave 80s kids, pop culture fans and Windows Devs alike a chance to celebrate the internet of things.

Our very own IoT master, Pete Brown, created a series on IoT, remote wiring, voice recognition and AI inspired by the Netflix hit, Stranger Things. Check it out below!

2016-10-31_strangerthings

Internet of Stranger Things Part 1

TL;DR – go ahead and binge watch the series before getting started.

Internet of Stranger Things Part 2

Pete Brown builds a wall. But it’s more than that – Pete adds to the Internet of Stranger Things project by constructing a wall that integrates music and UWP MIDI capabilities. Learn how to cue up your very own haunting 80s synth soundtrack with part 2!

Internet of Stranger Things Part 3

The final installment of the series covers voice recognition and intelligence – two things most IoT devices don’t necessarily support. Lo and behold, Pete Brown works his IoT magic in this post.

#XboxAppDev – Adding natural inputs

This post gets personal (with input methods). Learn how to add natural, intuitive input methods to your Xbox and UWP apps.

2016-11-04_speechandink

UWP Integrations for Kinect

Grab your demo hat and get ready for the new drivers and integrations now available for Kinect and UWP. Read more in this blog:

Windows 10 Insider Preview Build 14959 for Mobile and PC

Last, but certainly not least, we released a new build for Windows Insiders in the Fast Ring. There are quite a few updates here, most notably the new ‘Unified Update Platform,’ which helps streamline updates across your Windows 10 devices.

And that’s the week in Windows Dev! Feel free to tweet us with any questions, comments or suggestions for Pete Brown’s next example of IoT wizardry.

Download Visual Studio to get started.

The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

 

In Case You Missed It – This Week in Windows Developer


While most of the world froze in place to follow the endless stream of U.S. presidential election coverage, we continued to push forward in the world of Windows Developer. And by push forward, we humbly admit that we just kept geeking out over the new Surface Dial and its recently released APIs. (Check out the Surface Dial and more updates from our event here.)

What Devs Need to Know about the Windows 10 Creators Update & New Surface Devices

We recently learned that you can tweak the Surface Dial to be the ultimate debugging tool. Check it out here:

And while the politicians duked it out in the electoral college, one particular MVP found himself in a higher stakes conflict – battling aliens in a mall.

Insider Preview Build 14965

TL;DR – A bunch of updates and improvements across the board. Check out Dona’s post by clicking above.

MVP Summit

And, on a high note, we had a great time hosting our Microsoft MVPs in Redmond this week. Thank you to everyone who attended and helped organize the event. Here’s a quick recap from Day One:

Overall, regardless of what happens politically, there will always be more bugs to squash and even more code to write. So, on that note, have a great weekend; we’ll be right here waiting for you on Monday morning!

Download Visual Studio to get started.

The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

Windows Application Driver for PC integrates with Appium


The most time consuming and expensive part of developing an app is often the testing phase. Manually testing an app on hundreds of different devices is impractical, and existing automation solutions run into a number of platform and tooling limitations. Appium is designed to simplify testing by supporting multiple platforms, and it’s our goal at Microsoft with Windows Application Driver (WinAppDriver) to enable you to use Appium to test Windows apps. The recent release of Appium v1.6 integrates WinAppDriver so developers can run tests targeting Windows 10 PC applications through Appium!

What is WinAppDriver?

WinAppDriver is a modern, standards-based UI test automation service that aligns with the Selenium WebDriver protocol. The Windows Application Driver allows a developer to use the “write a test once, run anywhere” approach. Developers are no longer forced to choose a specific test language and runner, and they no longer need to rewrite tests for each platform.

What is Selenium/Appium?

Selenium is the industry standard for automated UI testing of websites/browser applications. Selenium works off of the WebDriver protocol, which is an open API for browser automation. Realizing that this same protocol could be leveraged for mobile app UI testing, the community created the Appium project, which extended the WebDriver API to allow for app-specific automation endpoints. WinAppDriver was created in the spirit of the Selenium/Appium projects to conform to the industry standards for UI testing and bring those standards to the Universal Windows Platform.

How it works

screen-shot-2016-11-15-at-5-12-47-pm

With Appium’s integration of WinAppDriver, developers will have full customization of their preferred testing language and test runner as shown in the diagram above—and they can reuse their tests if their app is on iOS and Android. It is only through Appium that developers can have this customization – each UWP developer might prefer a different test script/test runner for their UI tests, and because Appium uses the WebDriver protocol, developers can have that flexibility when authoring tests.

What about CodedUI?

The current UI test automation solution for Windows app testing is CodedUI; however, CodedUI only works for apps running on the Windows platform. For developers who write cross-platform apps, this means they have to write custom tests for each platform they are targeting.

With Appium supporting multiple platforms like Android and iOS, Microsoft encourages customers to use Selenium and Appium for Functional UI testing.

How can I get started?

To download Appium with Windows 10 PC support, make sure you have Node version >=6.0 and npm version >=3.5. Then use the following steps:

  1. In your command prompt, run npm install -g appium
  2. Then, run the command appium from an elevated command prompt
    1. Make sure developer mode is on as well
  3. Choose a test runner (Visual Studio, IntelliJ, Sublime Text etc.) and a language to test in (C#, Ruby, Python, etc.)
  4. Create a test targeting a Windows application of your choice.
    1. Set the URL targeting your Appium server, and the appId capability set to the app ID of the app you are testing.
    2. The platformName capability should be set to “Windows” and the deviceName capability set to “WindowsPC” in the test script.
  5. Run your test from the test runner targeting the Appium server URL

Here is a screenshot of what the install process looks like from the command line:

picture1

As part of the install, you should see that WinAppDriver is downloaded and successfully installed:

picture2

Then, just run Appium from the command line:

picture3

Now that the Appium server is running, you can run a test from your choice of test runner pointing to the Appium endpoint. In this example, we’ll use a test targeting the built-in Calculator app on Windows 10.

picture4

The key components (shown in the red boxes) are setting the URL to target the Appium server, as well as setting the app ID, platformName and deviceName as explained in the earlier instructions.
Once you run the test, you should see results in the test runner.

picture5
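
In case the screenshots are hard to read, here is a minimal sketch of such a test in C# (my own; the app ID below is the well-known ID of the built-in Calculator, and the URL is the default local Appium endpoint):


using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Remote;

class CalculatorTest
{
    static void Main()
    {
        var capabilities = new DesiredCapabilities();
        capabilities.SetCapability("app", "Microsoft.WindowsCalculator_8wekyb3d8bbwe!App");
        capabilities.SetCapability("platformName", "Windows");
        capabilities.SetCapability("deviceName", "WindowsPC");

        var session = new RemoteWebDriver(new Uri("http://127.0.0.1:4723/wd/hub"), capabilities);

        // 1 + 7 = 8
        session.FindElement(By.Name("One")).Click();
        session.FindElement(By.Name("Plus")).Click();
        session.FindElement(By.Name("Seven")).Click();
        session.FindElement(By.Name("Equals")).Click();

        // WinAppDriver maps By.Id to the AutomationId of the element
        Console.WriteLine(session.FindElement(By.Id("CalculatorResults")).Text);

        session.Quit();
    }
}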

To see sample tests, check out the sample apps/tests on the WinAppDriver Github page or in the Appium samples repo. For more information about WinAppDriver + Appium, visit Appium’s website or their GitHub, or check out these videos talking about how Appium and UI test automation works.

Panel with Jonathan Lipps from SauceLabs
UI Test Automation for Browsers and Apps Using the WebDriver Standard

Announcing UWP Community Toolkit 1.2


Following our commitment to ship new versions at a fast pace, I’m thrilled to announce the availability of UWP Community Toolkit 1.2. If you want to get a first glance, you can install the UWP Community Toolkit Sample App directly from the Windows Store.

The focus of this version was to stabilize current features while adding the most wanted ones that were missing. A full list of features and changes is available in the release notes. Here’s a quick preview:

  • New Helpers. We worked on providing 7 new helpers to help with everyday tasks:
    • BackgroundTaskHelper to help you work with background tasks
    • HttpHelper to help you deal with HTTP requests in a secure and reliable way
    • PrintHelper to help you print XAML controls
    • DispatcherHelper to help you work with tasks that need to run on the UI thread
    • DeepLinkHelper to simplify the management of your deep links
    • WebViewExtensions to allow you to bind HTML content to your WebView
    • SystemInformation to gather all system information into a single and unique class
  • New Controls
    • We introduced a new control named MasterDetailsView that helps developers create master/detail user experiences

controls-masterdetailsview

  • Updates. We updated the following features:
    • ImageCache was improved to provide a more robust cache
    • HeaderedTextBlock and PullToRefreshListView now accept ContentTemplate customization
    • Facebook service now supports paging when requesting data
    • Renamed BladeControl to BladeView. BladeView now also derives from ItemsControl. This will allow for more common convention like data binding and will make the control aligned with SDK naming. To guarantee backward compatibility, we kept the previous control and flagged it as obsolete. This way, developers can still reference the new version and everything will work just fine. A compiler warning will just encourage you to use the new version. The current plan is to keep obsolete classes until next major version and then remove them.

We saw an increasing number of contributions from a community of 48 developers, which led to several new features and improvements. We also observed some healthy dialogue about what should be included in the toolkit, architecture best practices, and feature prioritization that is going to drive even higher quality in the toolkit.

For example, I would like to share the story behind the MasterDetailsView. The story began with an issue created on the GitHub repo: “We need a MasterDetailView.” Immediately, the community reacted with tremendous energy, discussing the implementation details, features wanted and the philosophy behind the control. We even ended up with two different implementations at some point (the community then voted and discussed to define which best fit with the toolkit principles). If you want to understand how a united community can create wonderful code, I encourage you to read this thread.

You can find the roadmap of the next release here.

If you have any feedback or if you’re interested in contributing, see you on GitHub!

Download Visual Studio to get started!

ICYMI – Microsoft Connect, Linux, WIP and a new Insider Preview Build


Just when you thought you’d seen it all at the MVP Summit, we come back with a few exciting announcements from Connect. We want to thank you again for joining us, and if you couldn’t make it this time, continue reading to see what you might’ve missed.

Connect(); 2016

Connect, the annual Visual Studio-centered developer conference, announced the latest version of our favorite IDE, a preview of the new Visual Studio for Mac, Team Foundation Server 2017 and a preview of Visual Studio Mobile Center. On top of that, we announced our platinum-level partnership with the Linux Foundation. We’re thrilled to finally share all of these updates with you – follow the links below to learn more.

UWP Community Toolkit Update 1.2

Our goal with this update was to stabilize current features while adding the most wanted ones that were missing. Check out the blog to see the full list of updates, additions and assorted bells and whistles.

Windows Insider Preview Build 14971

Coming to you in this week’s build: improved reading experience in Microsoft Edge, new opportunities in 3D, PowerShell updates and a whole bunch of PC fixes.

And that’s all! Make sure to tweet us if you have any questions or comments and, as always, see you next week.

Download Visual Studio to get started.

The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.


Windows 10 SDK Preview Build 14965 Released


Today, we released a new Windows 10 Anniversary SDK Preview to be used in conjunction with Windows 10 Insider Preview (Build 14965 or greater). The Preview SDK is a pre-release and cannot be used in a production environment. Please only install the SDK on your test machine. The Preview SDK Build 14965 contains bug fixes and under development changes to the API surface area. If you are working on an application that you need to submit to the store, you should not install the preview.

The Preview SDK can be downloaded from the developer section on Windows Insider.

For feedback and updates to the known issues, please see the developer forum.  For new feature requests, head over to our Windows Platform UserVoice.

Things to note:

What’s New

Known Issues Windows SDK

  • Wrong GenXBF.DLL
    If you installed a previous Windows SDK flight, either version 14951 or 14931, you may have an incorrect GenXBF.dll installed. Follow these steps after installing the Windows 10 SDK Preview build 14965:
  1. Exit Visual Studio
  2. Open an Administrative command prompt
  3. Type the following:

    DEL “c:\Program Files (x86)\Windows Kits\10\bin\x86\genxbf.dll”

    DEL “c:\Program Files (x86)\Windows Kits\10\bin\x64\genxbf.dll”

  4. Run Control Panel
  5. Select Uninstall a Program
  6. Highlight Windows Software Development Kit – Windows 10.0.14965.1000
  7. Click Change
  8. Select Repair
  9. Click Next

Windows SDK setup will restore the missing GenXBF.dll files with the appropriate version.

  • Visual Studio 2017 fails with HRESULT: 0x80041FE2 when trying to create C++ UWP apps targeting build 14965 SDK

This is a known problem. Here are steps to address this issue in your project file:

  1. Close the project
  2. Open up the project file in notepad or your favorite editor
  3. Add the following to the project file:
      <PropertyGroup>
        <DoBundleInstallationChecks>false</DoBundleInstallationChecks>
      </PropertyGroup>
  4. Reopen the project in Visual Studio

Known Issues Microsoft Emulator

Microsoft Emulator Preview for Windows 10 Mobile (10.0.14965.0) crashes when launching

Impact:

Please note that there is a bug impacting the usage of hardware accelerated graphics in the latest release of the Mobile Emulator. Follow the instructions below to temporarily disable hardware accelerated graphics in the emulator and use the emulator with software rendered graphics (WARP).

NOTE: The following registry setting will impact any and all Microsoft Emulators installed on your machine. You will need to remove this registry setting in order to re-enable hardware accelerated graphics in the emulator.

  1. Create the following registry subkey if it doesn’t exist: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Xde\10.0
  2. Right click the 10.0 folder, point to New, and then click DWORD Value.
  3. Type DisableRemoteFx, and then press Enter.
  4. Double-click DisableRemoteFx, enter 1 in the Value data box, select the Decimal option, and then click OK.
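
If you prefer the command line, here is a sketch of the equivalent commands run from an administrative command prompt; the registry path and value are exactly those from the steps above:

REM Disable hardware-accelerated graphics for all installed Microsoft Emulators
reg add "HKLM\SOFTWARE\Wow6432Node\Microsoft\Xde\10.0" /v DisableRemoteFx /t REG_DWORD /d 1 /f

REM Later, delete the value to re-enable hardware-accelerated graphics
reg delete "HKLM\SOFTWARE\Wow6432Node\Microsoft\Xde\10.0" /v DisableRemoteFx /f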

API Updates and Additions

The following API changes are under development and new or updated for this release of the SDK.

namespace Windows.ApplicationModel.Preview.Notes {
  public sealed class NotesWindowManagerPreview {
    void SetFocusToPreviousView();
    IAsyncAction SetThumbnailImageForTaskSwitcherAsync(SoftwareBitmap bitmap);
    void ShowNoteRelativeTo(int noteViewId, int anchorNoteViewId, NotesWindowManagerPreviewShowNoteOptions options);
    void ShowNoteWithPlacement(int noteViewId, IBuffer data, NotesWindowManagerPreviewShowNoteOptions options);
  }
  public sealed class NotesWindowManagerPreviewShowNoteOptions
}
 
namespace Windows.Devices.Gpio {
  public sealed class GpioInterruptBuffer
  public struct GpioInterruptEvent
  public enum GpioOpenStatus {
    MuxingConflict = 3,
    UnknownError = 4,
  }
  public sealed class GpioPin : IClosable {
    GpioInterruptBuffer InterruptBuffer { get; }
    ulong InterruptCount { get; }
    void CreateInterruptBuffer();
    void CreateInterruptBuffer(int minimumCapacity);
    void StartInterruptBuffer();
    void StartInterruptBuffer(GpioPinEdge edge);
    void StartInterruptCount();
    void StartInterruptCount(GpioPinEdge edge);
    void StopInterruptBuffer();
    void StopInterruptCount();
  }
}
namespace Windows.Devices.Gpio.Provider {
  public interface IGpioInterruptBufferProvider
  public interface IGpioPinProvider2
  public struct ProviderGpioInterruptEvent
}
namespace Windows.Devices.I2c {
  public enum I2cTransferStatus {
    ClockStretchTimeout = 3,
    UnknownError = 4,
  }
}
 
namespace Windows.ApplicationModel {
  public sealed class Package {
    IAsyncOperation<PackageContentGroup> GetContentGroupAsync(string name);
    IAsyncOperation<IVector<PackageContentGroup>> GetContentGroupsAsync();
    IAsyncOperation<bool> SetInUseAsync(bool inUse);
    IAsyncOperation<IVector<PackageContentGroup>> StageContentGroupsAsync(IIterable<string> names);
    IAsyncOperation<IVector<PackageContentGroup>> StageContentGroupsAsync(IIterable<string> names, bool moveToHeadOfQueue);
  }
  public sealed class PackageCatalog {
    event TypedEventHandler<PackageCatalog, PackageContentGroupStagingEventArgs> PackageContentGroupStaging;
    IAsyncOperation<Package> AddOptionalPackageAsync(string optionalPackageFamilyName);
  }
  public sealed class PackageContentGroup
  public sealed class PackageContentGroupStagingEventArgs
  public enum PackageContentGroupState
}
namespace Windows.ApplicationModel.Activation {
  public enum ActivationKind {
    ContactPanel = 1017,
    LockScreenComponent = 1016,
  }
  public sealed class ContactPanelActivatedEventArgs : IActivatedEventArgs, IActivatedEventArgsWithUser, IContactPanelActivatedEventArgs
  public interface IContactPanelActivatedEventArgs
  public sealed class LockScreenComponentActivatedEventArgs : IActivatedEventArgs
  public sealed class ToastNotificationActivatedEventArgs : IActivatedEventArgs, IActivatedEventArgsWithUser, IApplicationViewActivatedEventArgs, IToastNotificationActivatedEventArgs {
    int CurrentlyShownApplicationViewId { get; }
  }
}
namespace Windows.ApplicationModel.Background {
  public sealed class GattCharacteristicNotificationTrigger : IBackgroundTrigger {
    public GattCharacteristicNotificationTrigger(GattCharacteristic characteristic, BluetoothEventTriggeringMode eventTriggeringMode);
    BluetoothEventTriggeringMode EventTriggeringMode { get; }
  }
  public sealed class GattServiceProviderTrigger : IBackgroundTrigger
}
namespace Windows.ApplicationModel.Contacts {
  public sealed class ContactAnnotation {
    string ContactGroupId { get; set; }
    string ContactListId { get; set; }
  }
  public enum ContactAnnotationOperations : uint {
    Share = (uint)32,
  }
  public sealed class ContactAnnotationStore {
    IAsyncOperation<IVectorView<ContactAnnotation>> FindAnnotationsForContactGroupAsync(string contactGroupId);
    IAsyncOperation<IVectorView<ContactAnnotation>> FindAnnotationsForContactListAsync(string contactListId);
  }
  public sealed class ContactGroup
  public sealed class ContactGroupMember
  public sealed class ContactGroupMemberBatch
  public sealed class ContactGroupMemberReader
  public enum ContactGroupOtherAppReadAccess
  public static class ContactManager {
    public static IAsyncOperation<bool> IsShowFullContactCardSupportedAsync();
  }
  public sealed class ContactManagerForUser {
    void ShowFullContactCard(Contact contact, FullContactCardOptions fullContactCardOptions);
  }
  public sealed class ContactPanel
  public sealed class ContactPanelClosingEventArgs
  public sealed class ContactPanelLaunchFullAppRequestedEventArgs
  public sealed class ContactPicker {
    User User { get; }
    public static ContactPicker CreateForUser(User user);
    public static IAsyncOperation<bool> IsSupportedAsync();
  }
  public sealed class ContactStore {
    IAsyncOperation<ContactGroup> CreateContactGroupAsync(string displayName);
    IAsyncOperation<ContactGroup> CreateContactGroupAsync(string displayName, string userDataAccountId);
    IAsyncOperation<IVectorView<ContactGroup>> FindContactGroupsAsync();
    IAsyncOperation<IVectorView<ContactGroup>> FindContactGroupsByRemoteIdAsync(string remoteId);
    IAsyncOperation<ContactGroup> GetContactGroupAsync(string contactGroupId);
  }
  public sealed class PinnedContactIdsQueryResult
  public sealed class PinnedContactManager
  public enum PinnedContactSurface
}
namespace Windows.ApplicationModel.Core {
  public sealed class CoreApplicationView {
    IPropertySet Properties { get; }
  }
}
namespace Windows.ApplicationModel.DataTransfer {
  public sealed class DataTransferManager {
    public static void ShowShareUI(ShareUIOptions shareOptions);
  }
  public sealed class ShareUIOptions
}
namespace Windows.ApplicationModel.Email {
  public sealed class EmailMessage {
    IVector<EmailRecipient> ReplyTo { get; }
    EmailRecipient SentRepresenting { get; set; }
  }
}
namespace Windows.ApplicationModel.Store.LicenseManagement {
  public static class LicenseManager {
    public static IAsyncAction RefreshLicensesAsync(LicenseRefreshOption refreshOption);
  }
  public enum LicenseRefreshOption
}
namespace Windows.ApplicationModel.UserDataAccounts {
  public sealed class UserDataAccount {
    bool CanShowCreateContactGroup { get; set; }
    bool IsProtectedUnderLock { get; set; }
    IPropertySet ProviderProperties { get; }
    IAsyncOperation<IVectorView<ContactGroup>> FindContactGroupsAsync();
    IAsyncOperation<IVectorView<UserDataTaskList>> FindUserDataTaskListsAsync();
    IAsyncOperation<string> TryShowCreateContactGroupAsync();
  }
  public sealed class UserDataAccountStore {
    IAsyncOperation<UserDataAccount> CreateAccountAsync(string userDisplayName, string packageRelativeAppId, string enterpriseId);
  }
}
namespace Windows.ApplicationModel.UserDataTasks {
  public sealed class UserDataTask
  public sealed class UserDataTaskBatch
  public enum UserDataTaskDaysOfWeek : uint
  public enum UserDataTaskDetailsKind
  public enum UserDataTaskKind
  public sealed class UserDataTaskList
  public sealed class UserDataTaskListLimitedWriteOperations
  public enum UserDataTaskListOtherAppReadAccess
  public enum UserDataTaskListOtherAppWriteAccess
  public sealed class UserDataTaskListSyncManager
  public enum UserDataTaskListSyncStatus
  public static class UserDataTaskManager
  public sealed class UserDataTaskManagerForUser
  public enum UserDataTaskPriority
  public enum UserDataTaskQueryKind
  public sealed class UserDataTaskQueryOptions
  public enum UserDataTaskQuerySortProperty
  public sealed class UserDataTaskReader
  public sealed class UserDataTaskRecurrenceProperties
  public enum UserDataTaskRecurrenceUnit
  public sealed class UserDataTaskRegenerationProperties
  public enum UserDataTaskRegenerationUnit
  public enum UserDataTaskSensitivity
  public sealed class UserDataTaskStore
  public enum UserDataTaskStoreAccessType
  public enum UserDataTaskWeekOfMonth
}
namespace Windows.ApplicationModel.UserDataTasks.DataProvider {
  public sealed class UserDataTaskDataProviderConnection
  public sealed class UserDataTaskDataProviderTriggerDetails
  public sealed class UserDataTaskListCompleteTaskRequest
  public sealed class UserDataTaskListCompleteTaskRequestEventArgs
  public sealed class UserDataTaskListCreateOrUpdateTaskRequest
  public sealed class UserDataTaskListCreateOrUpdateTaskRequestEventArgs
  public sealed class UserDataTaskListDeleteTaskRequest
  public sealed class UserDataTaskListDeleteTaskRequestEventArgs
  public sealed class UserDataTaskListSkipOccurrenceRequest
  public sealed class UserDataTaskListSkipOccurrenceRequestEventArgs
  public sealed class UserDataTaskListSyncManagerSyncRequest
  public sealed class UserDataTaskListSyncManagerSyncRequestEventArgs
}
namespace Windows.Gaming.Input {
  public sealed class FlightStick : IGameController
  public enum FlightStickButtons : uint
  public struct FlightStickReading
  public enum GameControllerSwitchKind
  public enum GameControllerSwitchPosition
  public sealed class RawGameController : IGameController
}
namespace Windows.Gaming.Input.Custom {
  public sealed class HidGameControllerProvider : IGameControllerProvider
  public interface IHidGameControllerInputSink : IGameControllerInputSink
}
namespace Windows.Graphics.Printing.PrintTicket {
  public interface IPrintTicketSchemaDisplayableElement : IPrintTicketSchemaElement
  public interface IPrintTicketSchemaElement
  public interface IPrintTicketSchemaOption : IPrintTicketSchemaDisplayableElement, IPrintTicketSchemaElement
  public interface IPrintTicketSchemaParameterDefinition : IPrintTicketSchemaElement
  public interface IPrintTicketSchemaValue
  public sealed class PrintTicketSchemaCapabilities : IPrintTicketSchemaElement
  public sealed class PrintTicketSchemaFeature : IPrintTicketSchemaDisplayableElement, IPrintTicketSchemaElement
  public sealed class PrintTicketSchemaParameterInitializer : IPrintTicketSchemaElement
  public enum tagSchemaParameterDataType
  public enum tagSchemaSelectionType
  public enum tagValueType
  public sealed class WorkflowPrintSchemaTicket : IPrintTicketSchemaElement
  public sealed class XmlNode
}
namespace Windows.Graphics.Printing.Workflow {
  public interface IPrinterPropertyBag
  public sealed class PrinterQueue
  public sealed class PrintTaskBackgroundSessionManager
  public sealed class PrintTaskConfig
  public sealed class PrintTaskForegroundSessionManager
  public sealed class PrintTaskSessionState
  public enum PrintTaskSessionStatus
  public sealed class PrintTaskSetupEventArgs
  public sealed class PrintTaskSubmissionController
  public sealed class PrintTaskSubmittedEventArgs
  public sealed class PrintTaskTarget
  public sealed class PrintTaskUIActivatedEventArgs : IActivatedEventArgs
  public sealed class PrintTaskXpsDataAvailableEventArgs
  public sealed class SourceContent
  public sealed class SpoolStreamContent
  public sealed class StreamTarget
  public sealed class WorkflowTaskContext
  public sealed class WorkflowTriggerDetails
  public sealed class XpsOmContent
  public sealed class XpsOmReceiver
}
namespace Windows.Management.Deployment {
  public enum DeploymentOptions : uint {
    EnableStreamedInstall = (uint)128,
    RequiredContentGroupOnly = (uint)256,
  }
  public sealed class PackageManager {
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> AddPackageAsync(Uri packageUri, IIterable<Uri> dependencyPackageUris, DeploymentOptions deploymentOptions, PackageVolume targetVolume, IIterable<string> optionalPackageFamilyNames, IIterable<Uri> externalPackageUris);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> RegisterPackageByFamilyNameAsync(string mainPackageFamilyName, IIterable<string> dependencyPackageFamilyNames, DeploymentOptions deploymentOptions, PackageVolume appDataVolume, IIterable<string> optionalPackageFamilyNames);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> StagePackageAsync(Uri packageUri, IIterable<Uri> dependencyPackageUris, DeploymentOptions deploymentOptions, PackageVolume targetVolume, IIterable<string> optionalPackageFamilyNames, IIterable<Uri> externalPackageUris);
  }
}
namespace Windows.Management.Policies {
  public sealed class BinaryPolicy
  public sealed class BooleanPolicy
  public static class BrowserPolicies
  public sealed class BrowserPoliciesForUser
  public sealed class Int32Policy
  public sealed class StringPolicy
}
namespace Windows.Media {
  public sealed class MediaExtensionManager {
    void RegisterMediaExtensionForAppService(IMediaExtension extension, AppServiceConnection connection);
  }
  public sealed class MediaMarkerSpeechSentenceBoundary : IMediaMarker
  public sealed class MediaMarkerSpeechWordBoundary : IMediaMarker
  public static class MediaMarkerTypes {
    public static string SentenceBoundary { get; }
    public static string WordBoundary { get; }
  }
  public struct MediaTimeRange
}
namespace Windows.Media.Capture {
  public sealed class MediaCaptureInitializationSettings {
    bool AlwaysPlaySystemShutterSound { get; set; }
  }
}
namespace Windows.Media.Core {
  public sealed class ChapterCue : IMediaCue
  public sealed class DataCue : IMediaCue {
    PropertySet Properties { get; }
  }
  public sealed class ImageCue : IMediaCue
  public sealed class MediaBindingEventArgs {
    void SetAdaptiveMediaSource(AdaptiveMediaSource mediaSource);
    void SetStorageFile(IStorageFile file);
  }
  public sealed class MediaSource : IClosable, IMediaPlaybackSource {
    AdaptiveMediaSource AdaptiveMediaSource { get; }
    MediaStreamSource MediaStreamSource { get; }
    MseStreamSource MseStreamSource { get; }
    Uri Uri { get; }
  }
  public sealed class MediaStreamSource : IMediaSource {
    IReference<double> MaxSupportedPlaybackRate { get; set; }
  }
  public enum TimedMetadataKind {
    ImageSubtitle = 6,
  }
  public enum TimedTextFontStyle
  public sealed class TimedTextSource {
    public static TimedTextSource CreateFromStreamWithIndex(IRandomAccessStream stream, IRandomAccessStream indexStream);
    public static TimedTextSource CreateFromStreamWithIndex(IRandomAccessStream stream, IRandomAccessStream indexStream, string defaultLanguage);
    public static TimedTextSource CreateFromUriWithIndex(Uri uri, Uri indexUri);
    public static TimedTextSource CreateFromUriWithIndex(Uri uri, Uri indexUri, string defaultLanguage);
  }
  public sealed class TimedTextStyle {
    TimedTextFontStyle FontStyle { get; set; }
    bool IsLineThroughEnabled { get; set; }
    bool IsOverlineEnabled { get; set; }
    bool IsUnderlineEnabled { get; set; }
  }
}
namespace Windows.Media.Core.Preview {
  public static class SoundLevelBroker
}
namespace Windows.Media.MediaProperties {
  public static class MediaEncodingSubtypes {
    public static string D16 { get; }
    public static string L16 { get; }
    public static string L8 { get; }
    public static string Vp9 { get; }
  }
  public enum SphericalVideoFrameFormat
  public sealed class VideoEncodingProperties : IMediaEncodingProperties {
    SphericalVideoFrameFormat SphericalVideoFrameFormat { get; }
  }
}
namespace Windows.Media.Playback {
  public enum AutoLoadedDisplayPropertyKind
  public sealed class CurrentMediaPlaybackItemChangedEventArgs {
    MediaPlaybackItemChangedReason Reason { get; }
  }
  public sealed class MediaPlaybackItem : IMediaPlaybackSource {
    AutoLoadedDisplayPropertyKind AutoLoadedDisplayProperties { get; set; }
    bool IsDisabledInPlaybackList { get; set; }
    double TotalDownloadProgress { get; }
  }
  public enum MediaPlaybackItemChangedReason
  public sealed class MediaPlaybackList : IMediaPlaybackSource {
    IReference<uint> MaxPlayedItemsToKeepOpen { get; set; }
  }
  public sealed class MediaPlaybackSession {
    bool IsMirroring { get; set; }
    MediaPlaybackSphericalVideoProjection SphericalVideoProjection { get; }
    event TypedEventHandler<MediaPlaybackSession, object> BufferedRangesChanged;
    event TypedEventHandler<MediaPlaybackSession, object> PlayedRangesChanged;
    event TypedEventHandler<MediaPlaybackSession, object> SeekableRangesChanged;
    event TypedEventHandler<MediaPlaybackSession, object> SupportedPlaybackRatesChanged;
    IVectorView<MediaTimeRange> GetBufferedRanges();
    IVectorView<MediaTimeRange> GetPlayedRanges();
    IVectorView<MediaTimeRange> GetSeekableRanges();
    bool IsSupportedPlaybackRateRange(double rate1, double rate2);
  }
  public sealed class MediaPlaybackSphericalVideoProjection
}
namespace Windows.Media.Protection.PlayReady {
  public interface IPlayReadyLicenseSession2 : IPlayReadyLicenseSession
  public sealed class PlayReadyLicense : IPlayReadyLicense {
    bool ExpiresInRealTime { get; }
    bool InMemoryOnly { get; }
    Guid SecureStopId { get; }
    uint SecurityLevel { get; }
  }
  public sealed class PlayReadyLicenseAcquisitionServiceRequest : IMediaProtectionServiceRequest, IPlayReadyLicenseAcquisitionServiceRequest, IPlayReadyServiceRequest {
    PlayReadyLicenseIterable CreateLicenseIterable(PlayReadyContentHeader contentHeader, bool fullyEvaluated);
  }
  public sealed class PlayReadyLicenseSession : IPlayReadyLicenseSession, IPlayReadyLicenseSession2 {
    PlayReadyLicenseIterable CreateLicenseIterable(PlayReadyContentHeader contentHeader, bool fullyEvaluated);
  }
}
namespace Windows.Media.SpeechSynthesis {
  public sealed class SpeechSynthesisOptions
  public sealed class SpeechSynthesizer : IClosable {
    SpeechSynthesisOptions Options { get; }
  }
}
namespace Windows.Media.Streaming.Adaptive {
  public sealed class AdaptiveMediaSource : IClosable, IMediaSource {
    IReference<TimeSpan> DesiredSeekableWindowSize { get; set; }
    AdaptiveMediaSourceDiagnostics Diagnostics { get; }
    IReference<TimeSpan> MaxSeekableWindowSize { get; }
    IReference<TimeSpan> MinLiveOffset { get; }
    void Close();
    AdaptiveMediaSourceCorrelatedTimes GetCorrelatedTimes();
  }
  public sealed class AdaptiveMediaSourceCorrelatedTimes
  public sealed class AdaptiveMediaSourceDiagnosticAvailableEventArgs
  public sealed class AdaptiveMediaSourceDiagnostics
  public enum AdaptiveMediaSourceDiagnosticType
  public sealed class AdaptiveMediaSourceDownloadBitrateChangedEventArgs {
    AdaptiveMediaSourceDownloadBitrateChangedReason Reason { get; }
  }
  public enum AdaptiveMediaSourceDownloadBitrateChangedReason
}
namespace Windows.Networking.NetworkOperators {
  public sealed class MobileBroadbandAccount {
    Uri AccountExperienceUrl { get; }
  }
  public sealed class MobileBroadbandDeviceInformation {
    string SimGid1 { get; }
    string SimPnn { get; }
    string SimSpn { get; }
  }
}
namespace Windows.Payments {
  public interface IPaymentItem
  public sealed class PaymentAddress
  public static class PaymentAppRegistration
  public sealed class PaymentCurrencyAmount
  public sealed class PaymentDetails
  public sealed class PaymentDetailsModifier
  public sealed class PaymentItem : IPaymentItem
  public static class PaymentMediator
  public sealed class PaymentMerchantInfo
  public sealed class PaymentMethodData
  public enum PaymentOptionPresence
  public sealed class PaymentOptions
  public sealed class PaymentRequest
  public sealed class PaymentRequestChangedEventArgs
  public delegate IAsyncOperation<PaymentRequestChangedEventResult> PaymentRequestChangedEventHandler(PaymentRequest paymentRequest, PaymentRequestChangedEventArgs args);
  public sealed class PaymentRequestChangedEventResult
  public enum PaymentRequestChangeSource
  public enum PaymentRequestCompletionStatus
  public enum PaymentRequestStatus
  public sealed class PaymentRequestSubmitResult
  public sealed class PaymentResponse
  public sealed class PaymentShippingOption : IPaymentItem
  public sealed class PaymentToken
  public sealed class PaymentTransaction
  public sealed class PaymentTransactionAcceptResult
}
namespace Windows.Perception.Spatial.Preview {
  public interface ISpatialAnchorStorage
  public sealed class SpatialAnchorMetadata
  public enum SpatialAnchorStorageContentChange
  public sealed class SpatialAnchorStorageContentChangedEventArgs
  public sealed class SpatialElement
  public sealed class SpatialElementChangedEventArgs
  public sealed class SpatialElementStore
}
namespace Windows.Perception.Spatial.Preview.Sharing {
  public interface ISpatialSharingSession
  public interface ISpatialSharingSessionHost
  public interface ISpatialSharingSessionManager
  public sealed class SessionChangedEventArgs
  public sealed class SessionInviteReceivedEventArgs
  public sealed class SessionMessageReceivedEventArgs
  public sealed class SessionParticipantEventArgs
  public sealed class SessionParticipantLeftEventArgs
  public sealed class SpatialSharingDevice
  public sealed class SpatialSharingQueryResult
  public sealed class SpatialSharingSession : ISpatialAnchorStorage, ISpatialSharingSession
  public sealed class SpatialSharingSessionHost : ISpatialSharingSessionHost
  public sealed class SpatialSharingSessionInvite
  public sealed class SpatialSharingSessionManager : ISpatialSharingSessionManager
  public sealed class SpatialSharingSessionParticipant
  public enum SpatialSharingSessionState
  public sealed class SpatialSharingSessionToken
}
namespace Windows.Security.Cryptography.Certificates {
  public sealed class CertificateExtension
  public sealed class CertificateRequestProperties {
    IVector<CertificateExtension> Extensions { get; }
    SubjectAlternativeNameInfo SubjectAlternativeName { get; }
    IVector<string> SuppressedDefaults { get; }
  }
  public sealed class SubjectAlternativeNameInfo {
    IVector<string> DistinguishedNames { get; }
    IVector<string> DnsNames { get; }
    IVector<string> EmailNames { get; }
    CertificateExtension Extension { get; }
    IVector<string> IPAddresses { get; }
    IVector<string> PrincipalNames { get; }
    IVector<string> Urls { get; }
  }
}
namespace Windows.Services.Cortana {
  public enum CortanaPermission
  public enum CortanaPermissionsChangeResult
  public sealed class CortanaPermissionsManager
}
namespace Windows.Services.Maps {
  public sealed class EnhancedWaypoint
  public static class MapRouteFinder {
    public static IAsyncOperation<MapRouteFinderResult> GetDrivingRouteFromEnhancedWaypointsAsync(IIterable<EnhancedWaypoint> waypoints);
    public static IAsyncOperation<MapRouteFinderResult> GetDrivingRouteFromEnhancedWaypointsAsync(IIterable<EnhancedWaypoint> waypoints, MapRouteDrivingOptions options);
  }
  public static class MapService {
    public static MapServiceDataUsagePreference DataUsagePreference { get; set; }
  }
  public enum MapServiceDataUsagePreference
  public enum WaypointKind
}
namespace Windows.Services.Maps.OfflineMaps {
  public sealed class OfflineMapPackage
  public sealed class OfflineMapPackageQueryResult
  public enum OfflineMapPackageQueryStatus
  public sealed class OfflineMapPackageStartDownloadResult
  public enum OfflineMapPackageStartDownloadStatus
  public enum OfflineMapPackageStatus
}
namespace Windows.System {
  public sealed class DispatcherQueue
  public delegate void DispatcherQueueHandler();
  public delegate IAsyncAction DispatcherQueueHandlerAsync();
  public sealed class DispatcherQueueOptions
  public enum DispatcherQueuePriority
  public sealed class DispatcherQueueTimer
}
namespace Windows.System.Preview.RemoteSessions {
  public enum BinaryChannelTransportMode
  public sealed class RemoteSession
  public sealed class RemoteSessionAddedEventArgs
  public sealed class RemoteSessionBinaryChannel
  public sealed class RemoteSessionBinaryMessageReceivedEventArgs
  public enum RemoteSessionConnectionStatus
  public sealed class RemoteSessionConnectResult
  public sealed class RemoteSessionDisconnectedEventArgs
  public enum RemoteSessionDisconnectedReason
  public sealed class RemoteSessionInfo
  public sealed class RemoteSessionInvitationManager
  public sealed class RemoteSessionInvitationReceivedEventArgs
  public sealed class RemoteSessionJoinRequest
  public sealed class RemoteSessionJoinRequestedEventArgs
  public sealed class RemoteSessionParticipant
  public sealed class RemoteSessionParticipantChangedEventArgs
  public sealed class RemoteSessionRemovedEventArgs
  public sealed class RemoteSessionUpdatedEventArgs
  public sealed class RemoteSessionWatcher
}
namespace Windows.System.Profile {
  public static class EducationSettings
}
namespace Windows.System.RemoteSystems {
  public sealed class RemoteSystem {
    IAsyncOperation<bool> GetResourceAvailableAsync(string query);
  }
}
namespace Windows.System.RemoteSystems.Preview {
  public static class RemoteSystemResourceQuery
}
namespace Windows.UI.Composition {
  public class CompositionDrawingSurface : CompositionObject, ICompositionSurface {
  }
  public sealed class CompositionGraphicsDevice : CompositionObject {
    CompositionVirtualDrawingSurface CreateVirtualDrawingSurface(Size sizePixels, DirectXPixelFormat pixelFormat, DirectXAlphaMode alphaMode);
  }
  public sealed class CompositionVirtualDrawingSurface : CompositionDrawingSurface, ICompositionSurface
  public sealed class CompositionVisualSurface : CompositionObject, ICompositionSurface
  public sealed class CompositionWindowBackdropBrush : CompositionBrush
  public sealed class Compositor : IClosable {
    CompositionVisualSurface CreateVisualSurface();
    CompositionWindowBackdropBrush CreateWindowBackdropBrush();
  }
  public sealed class LayerVisual : ContainerVisual {
    CompositionShadow Shadow { get; set; }
  }
  public class Visual : CompositionObject {
    Vector3 RelativeOffset { get; set; }
    Vector2 RelativeSize { get; set; }
    Visual TransformParent { get; set; }
  }
}
namespace Windows.UI.Core {
  public sealed class CoreWindow : ICorePointerRedirector, ICoreWindow {
    event TypedEventHandler<CoreWindow, object> ResizeCompleted;
    event TypedEventHandler<CoreWindow, object> ResizeStarted;
  }
}
namespace Windows.UI.Input {
  public static class KnownSimpleHapticsControllerWaveforms
  public sealed class RadialController {
    event TypedEventHandler<RadialController, RadialControllerButtonHoldingEventArgs> ButtonHolding;
    event TypedEventHandler<RadialController, RadialControllerButtonPressedEventArgs> ButtonPressed;
    event TypedEventHandler<RadialController, RadialControllerButtonReleasedEventArgs> ButtonReleased;
  }
  public sealed class RadialControllerButtonClickedEventArgs {
    SimpleHapticsController SimpleHapticsController { get; }
  }
  public sealed class RadialControllerButtonHoldingEventArgs
  public sealed class RadialControllerButtonPressedEventArgs
  public sealed class RadialControllerButtonReleasedEventArgs
  public sealed class RadialControllerConfiguration {
    RadialController ActiveControllerWhenMenuIsSuppressed { get; set; }
    bool IsMenuSuppressed { get; set; }
  }
  public sealed class RadialControllerControlAcquiredEventArgs {
    bool IsButtonPressed { get; }
    SimpleHapticsController SimpleHapticsController { get; }
  }
  public sealed class RadialControllerMenuItem {
    public static RadialControllerMenuItem CreateFromFontGlyph(string displayText, string glyph, string fontFamily);
    public static RadialControllerMenuItem CreateFromFontGlyph(string displayText, string glyph, string fontFamily, Uri fontUri);
  }
  public sealed class RadialControllerRotationChangedEventArgs {
    bool IsButtonPressed { get; }
    SimpleHapticsController SimpleHapticsController { get; }
  }
  public sealed class RadialControllerScreenContactContinuedEventArgs {
    bool IsButtonPressed { get; }
    SimpleHapticsController SimpleHapticsController { get; }
  }
  public sealed class RadialControllerScreenContactEndedEventArgs
  public sealed class RadialControllerScreenContactStartedEventArgs {
    bool IsButtonPressed { get; }
    SimpleHapticsController SimpleHapticsController { get; }
  }
  public sealed class SimpleHapticsController
  public sealed class SimpleHapticsControllerFeedback
}
namespace Windows.UI.Input.Core {
  public sealed class RadialControllerIndependentInputSource
}
namespace Windows.UI.Input.Inking {
  public enum InkPersistenceFormat
  public sealed class InkPresenterProtractor : IInkPresenterStencil
  public sealed class InkPresenterRuler : IInkPresenterStencil {
    bool AreTickMarksVisible { get; set; }
    bool IsCompassVisible { get; set; }
  }
  public enum InkPresenterStencilKind {
    Protractor = 2,
  }
  public sealed class InkStroke {
    uint Id { get; }
    IReference<TimeSpan> StrokeDuration { get; set; }
    IReference<DateTime> StrokeStartedTime { get; set; }
  }
  public sealed class InkStrokeBuilder {
    InkStroke CreateStrokeFromInkPoints(IIterable<InkPoint> inkPoints, Matrix3x2 transform, IReference<DateTime> strokeStartedTime, IReference<TimeSpan> strokeDuration);
  }
  public sealed class InkStrokeContainer : IInkStrokeContainer {
    InkStroke GetStrokeById(uint id);
    IAsyncOperationWithProgress<uint, uint> SaveAsync(IOutputStream outputStream, InkPersistenceFormat inkPersistenceFormat);
  }
}
namespace Windows.UI.Input.Spatial {
  public sealed class SpatialHoldCompletedEventArgs {
    SpatialPointingPose TryGetPointingPose(SpatialCoordinateSystem coordinateSystem);
  }
  public sealed class SpatialHoldStartedEventArgs {
    SpatialPointingPose TryGetPointingPose(SpatialCoordinateSystem coordinateSystem);
  }
  public sealed class SpatialInteractionDetectedEventArgs {
    SpatialPointingPose TryGetPointingPose(SpatialCoordinateSystem coordinateSystem);
  }
  public enum SpatialInteractionKind
  public sealed class SpatialInteractionSource {
    bool SupportsPointing { get; }
  }
  public sealed class SpatialInteractionSourceEventArgs {
    SpatialInteractionKind InteractionKind { get; }
    SpatialPointingPose TryGetPointingPose(SpatialCoordinateSystem coordinateSystem);
  }
  public sealed class SpatialInteractionSourceState {
    bool IsGrasped { get; }
    bool IsPrimaryPressed { get; }
    bool IsSecondaryPressed { get; }
    SpatialPointingPose TryGetPointingPose(SpatialCoordinateSystem coordinateSystem);
  }
  public sealed class SpatialPointerPose {
    SpatialPointingPose TryGetPointingPose(SpatialInteractionSource source);
  }
  public sealed class SpatialPointingPose
  public sealed class SpatialTappedEventArgs {
    SpatialPointingPose TryGetPointingPose(SpatialCoordinateSystem coordinateSystem);
  }
}
namespace Windows.UI.Notifications {
  public sealed class NotificationData
  public enum NotificationUpdateResult
  public sealed class ToastCollection
  public sealed class ToastCollectionManager
  public sealed class ToastNotification {
    NotificationData Data { get; set; }
  }
  public sealed class ToastNotificationHistoryChangedTriggerDetail {
    string CollectionId { get; }
  }
  public static class ToastNotificationManager {
    public static ToastNotificationManagerForUser Current { get; }
  }
  public sealed class ToastNotificationManagerForUser {
    IAsyncOperation<ToastNotificationHistory> GetHistoryForToastCollectionIdAsync(string collectionId);
    ToastCollectionManager GetToastCollectionManager();
    ToastCollectionManager GetToastCollectionManager(string appId);
    IAsyncOperation<ToastNotifier> GetToastNotifierForToastCollectionIdAsync(string collectionId);
  }
  public sealed class ToastNotifier {
    NotificationUpdateResult Update(NotificationData data, string tag);
    NotificationUpdateResult Update(NotificationData data, string tag, string group);
  }
}
namespace Windows.UI.Text {
  public enum TextDecorations : uint
}
namespace Windows.UI.ViewManagement {
  public sealed class ApplicationView {
    IAsyncOperation<bool> TryConsolidateAsync();
  }
  public sealed class ApplicationViewConsolidatedEventArgs {
    bool IsAppInitiated { get; }
  }
}
namespace Windows.UI.WebUI {
  public sealed class WebUIContactPanelActivatedEventArgs : IActivatedEventArgs, IActivatedEventArgsDeferral, IActivatedEventArgsWithUser, IContactPanelActivatedEventArgs
  public sealed class WebUILockScreenComponentActivatedEventArgs : IActivatedEventArgs, IActivatedEventArgsDeferral
}
namespace Windows.UI.Xaml {
  public sealed class BringIntoViewOptions
  public class FrameworkElement : UIElement {
    public static void DeferTree(DependencyObject element);
  }
  public class UIElement : DependencyObject {
    double KeyTipHorizontalOffset { get; set; }
    public static DependencyProperty KeyTipHorizontalOffsetProperty { get; }
    KeyTipPlacementMode KeyTipPlacementMode { get; set; }
    public static DependencyProperty KeyTipPlacementModeProperty { get; }
    double KeyTipVerticalOffset { get; set; }
    public static DependencyProperty KeyTipVerticalOffsetProperty { get; }
    XYFocusKeyboardNavigationMode XYFocusKeyboardNavigation { get; set; }
    public static DependencyProperty XYFocusKeyboardNavigationProperty { get; }
    void StartBringIntoView();
    void StartBringIntoView(BringIntoViewOptions options);
  }
}
namespace Windows.UI.Xaml.Automation {
  public sealed class AutomationElementIdentifiers {
    public static AutomationProperty CultureProperty { get; }
  }
  public sealed class AutomationProperties {
    public static DependencyProperty CultureProperty { get; }
    public static int GetCulture(DependencyObject element);
    public static void SetCulture(DependencyObject element, int value);
  }
}
namespace Windows.UI.Xaml.Automation.Peers {
  public class AutomationPeer : DependencyObject {
    int GetCulture();
    virtual int GetCultureCore();
  }
  public sealed class MapControlAutomationPeer : FrameworkElementAutomationPeer, IScrollProvider, ITransformProvider, ITransformProvider2 {
    bool CanMove { get; }
    bool CanResize { get; }
    bool CanRotate { get; }
    bool CanZoom { get; }
    double MaxZoom { get; }
    double MinZoom { get; }
    double ZoomLevel { get; }
    void Move(double x, double y);
    void Resize(double width, double height);
    void Rotate(double degrees);
    void Zoom(double zoom);
    void ZoomByUnit(ZoomUnit zoomUnit);
  }
}
namespace Windows.UI.Xaml.Controls {
  public class ContentDialog : ContentControl {
    bool IsTertiaryButtonEnabled { get; set; }
    public static DependencyProperty IsTertiaryButtonEnabledProperty { get; }
    Style PrimaryButtonStyle { get; set; }
    public static DependencyProperty PrimaryButtonStyleProperty { get; }
    Style SecondaryButtonStyle { get; set; }
    public static DependencyProperty SecondaryButtonStyleProperty { get; }
    ICommand TertiaryButtonCommand { get; set; }
    object TertiaryButtonCommandParameter { get; set; }
    public static DependencyProperty TertiaryButtonCommandParameterProperty { get; }
    public static DependencyProperty TertiaryButtonCommandProperty { get; }
    Style TertiaryButtonStyle { get; set; }
    public static DependencyProperty TertiaryButtonStyleProperty { get; }
    string TertiaryButtonText { get; set; }
    public static DependencyProperty TertiaryButtonTextProperty { get; }
    event TypedEventHandler<ContentDialog, ContentDialogButtonClickEventArgs> TertiaryButtonClick;
  }
  public enum ContentDialogResult {
    Tertiary = 3,
  }
  public class Control : FrameworkElement {
    Uri DefaultStyleResourceUri { get; set; }
    public static DependencyProperty DefaultStyleResourceUriProperty { get; }
  }
  public sealed class FocusEngagedEventArgs : RoutedEventArgs {
    bool Handled { get; set; }
  }
  public class Frame : ContentControl, INavigate {
    void SetNavigationState(string navigationState, bool suppressNavigate);
  }
  public class InkToolbar : Control {
    InkToolbarButtonFlyoutPlacement ButtonFlyoutPlacement { get; set; }
    public static DependencyProperty ButtonFlyoutPlacementProperty { get; }
    bool IsStencilButtonChecked { get; set; }
    public static DependencyProperty IsStencilButtonCheckedProperty { get; }
    Orientation Orientation { get; set; }
    public static DependencyProperty OrientationProperty { get; }
    event TypedEventHandler<InkToolbar, object> BringStencilIntoViewRequested;
    event TypedEventHandler<InkToolbar, object> EraserWidthChanged;
    event TypedEventHandler<InkToolbar, InkToolbarIsStencilButtonCheckedChangedEventArgs> IsStencilButtonCheckedChanged;
    InkToolbarMenuButton GetMenuButton(InkToolbarMenuKind menu);
  }
  public enum InkToolbarButtonFlyoutPlacement
  public class InkToolbarEraserButton : InkToolbarToolButton {
    InkToolbarEraserKind EraserKind { get; set; }
    public static DependencyProperty EraserKindProperty { get; }
    bool IsClearAllVisible { get; set; }
    public static DependencyProperty IsClearAllVisibleProperty { get; }
    bool IsWidthSliderVisible { get; set; }
    public static DependencyProperty IsWidthSliderVisibleProperty { get; }
    double MaxStrokeWidth { get; set; }
    public static DependencyProperty MaxStrokeWidthProperty { get; }
    double MinStrokeWidth { get; set; }
    public static DependencyProperty MinStrokeWidthProperty { get; }
    double SelectedStrokeWidth { get; set; }
    public static DependencyProperty SelectedStrokeWidthProperty { get; }
  }
  public enum InkToolbarEraserKind
  public class InkToolbarFlyoutItem : ButtonBase
  public enum InkToolbarFlyoutItemKind
  public sealed class InkToolbarIsStencilButtonCheckedChangedEventArgs
  public class InkToolbarMenuButton : ToggleButton
  public enum InkToolbarMenuKind
  public class InkToolbarPenConfigurationControl : Control {
    InkToolbarEraserButton EraserButton { get; }
    public static DependencyProperty EraserButtonProperty { get; }
  }
  public class InkToolbarStencilButton : InkToolbarMenuButton
  public enum InkToolbarStencilKind
  public sealed class RichTextBlock : FrameworkElement {
    TextDecorations TextDecorations { get; set; }
    public static DependencyProperty TextDecorationsProperty { get; }
  }
  public sealed class TextBlock : FrameworkElement {
    TextDecorations TextDecorations { get; set; }
    public static DependencyProperty TextDecorationsProperty { get; }
  }
}
namespace Windows.UI.Xaml.Controls.Maps {
  public sealed class MapBillboard : MapElement
  public sealed class MapContextRequestedEventArgs
  public sealed class MapControl : Control {
    MapProjection MapProjection { get; set; }
    public static DependencyProperty MapProjectionProperty { get; }
    MapStyleSheet StyleSheet { get; set; }
    public static DependencyProperty StyleSheetProperty { get; }
    Thickness ViewPadding { get; set; }
    public static DependencyProperty ViewPaddingProperty { get; }
    event TypedEventHandler<MapControl, MapContextRequestedEventArgs> MapContextRequested;
    IVectorView<MapElement> FindMapElementsAtOffset(Point offset, double radius);
    void GetLocationFromOffset(Point offset, AltitudeReferenceSystem desiredReferenceSystem, out Geopoint location);
    void StartContinuousPan(double horizontalPixelsPerSecond, double verticalPixelsPerSecond);
    void StopContinuousPan();
    IAsyncOperation<bool> TryPanAsync(double horizontalPixels, double verticalPixels);
    IAsyncOperation<bool> TryPanToAsync(Geopoint location);
  }
  public enum MapProjection
  public enum MapStyle {
    Custom = 7,
  }
  public sealed class MapStyleSheet : DependencyObject
}
namespace Windows.UI.Xaml.Controls.Primitives {
  public class FlyoutBase : DependencyObject {
    DependencyObject OverlayInputPassThroughElement { get; set; }
    public static DependencyProperty OverlayInputPassThroughElementProperty { get; }
  }
}
namespace Windows.UI.Xaml.Documents {
  public sealed class Hyperlink : Span {
    FocusState FocusState { get; }
    public static DependencyProperty FocusStateProperty { get; }
    event RoutedEventHandler GotFocus;
    event RoutedEventHandler LostFocus;
    bool Focus(FocusState value);
  }
  public class TextElement : DependencyObject {
    double KeyTipHorizontalOffset { get; set; }
    public static DependencyProperty KeyTipHorizontalOffsetProperty { get; }
    KeyTipPlacementMode KeyTipPlacementMode { get; set; }
    public static DependencyProperty KeyTipPlacementModeProperty { get; }
    double KeyTipVerticalOffset { get; set; }
    public static DependencyProperty KeyTipVerticalOffsetProperty { get; }
    TextDecorations TextDecorations { get; set; }
    public static DependencyProperty TextDecorationsProperty { get; }
    event TypedEventHandler<TextElement, AccessKeyDisplayDismissedEventArgs> AccessKeyDisplayDismissed;
    event TypedEventHandler<TextElement, AccessKeyDisplayRequestedEventArgs> AccessKeyDisplayRequested;
    event TypedEventHandler<TextElement, AccessKeyInvokedEventArgs> AccessKeyInvoked;
  }
}
namespace Windows.UI.Xaml.Input {
  public sealed class AccessKeyManager {
    public static bool AreKeyTipsEnabled { get; set; }
  }
  public enum KeyTipPlacementMode
  public enum XYFocusKeyboardNavigationMode
}
namespace Windows.UI.Xaml.Markup {
  public sealed class XamlMarkupHelper
}
 
namespace Windows.Media.Capture {
  public sealed class AppCaptureDurationGeneratedEventArgs
  public sealed class AppCaptureFileGeneratedEventArgs
  public enum AppCaptureMicrophoneCaptureState
  public sealed class AppCaptureMicrophoneCaptureStateChangedEventArgs
  public enum AppCaptureRecordingState
  public sealed class AppCaptureRecordingStateChangedEventArgs
  public sealed class AppCaptureRecordOperation
  public sealed class AppCaptureServices
  public sealed class AppCaptureState
}
 
namespace Windows.Services.Store {
  public sealed class StoreContext {
    IAsyncOperation<StoreProductResult> FindStoreProductForPackageAsync(IIterable<string> productKinds, Package package);
  }
}

API Removals

namespace Windows.UI.Composition {
  public sealed class CompositionDrawingSurface : CompositionObject, ICompositionSurface {
  }
}

API Additions not yet implemented

The Bluetooth APIs were included to receive feedback from the developer community; a speculative usage sketch follows the listing below.

namespace Windows.Devices.Bluetooth {
  public sealed class BluetoothAdapter
  public sealed class BluetoothDeviceId
  public enum BluetoothError {
    TransportNotSupported = 9,
  }
  public sealed class BluetoothLEDevice : IClosable {
    DeviceAccessInformation DeviceAccessInformation { get; }
    IAsyncOperation<GattDeviceServicesResult> GetGattServicesAsync();
    IAsyncOperation<GattDeviceServicesResult> GetGattServicesAsync(BluetoothCacheMode cacheMode);
    IAsyncOperation<GattDeviceServicesResult> GetGattServicesForUuidAsync(GattUuid serviceUuid);
    IAsyncOperation<GattDeviceServicesResult> GetGattServicesForUuidAsync(GattUuid serviceUuid, BluetoothCacheMode cacheMode);
    IAsyncOperation<DeviceAccessStatus> RequestAccessAsync();
  }
  public enum BluetoothTransportOptions : uint
}
namespace Windows.Devices.Bluetooth.Background {
  public enum BluetoothEventTriggeringMode
  public sealed class GattCharacteristicNotificationTriggerDetails {
    BluetoothError Error { get; }
    BluetoothEventTriggeringMode EventTriggeringMode { get; }
    IVectorView<GattValueChangedEventArgs> ValueChangedEvents { get; }
  }
  public sealed class GattServiceProviderBackgroundInfo
  public sealed class GattServiceProviderRequestActivityInfo
  public enum GattServiceProviderRequestActivityType
  public enum GattServiceProviderRequestAttributeType
  public sealed class GattServiceProviderTriggerDetails
  public enum GattServiceProviderTriggerReason
}
namespace Windows.Devices.Bluetooth.GenericAttributeProfile {
  public sealed class GattCharacteristic {
    IAsyncOperation<GattDescriptorsResult> GetDescriptorsAsync();
    IAsyncOperation<GattDescriptorsResult> GetDescriptorsAsync(BluetoothCacheMode cacheMode);
    IAsyncOperation<GattDescriptorsResult> GetDescriptorsForUuidAsync(GattUuid descriptorUuid);
    IAsyncOperation<GattDescriptorsResult> GetDescriptorsForUuidAsync(GattUuid descriptorUuid, BluetoothCacheMode cacheMode);
    IAsyncOperation<GattWriteResult> WriteValueWithResultAsync(IBuffer value);
    IAsyncOperation<GattWriteResult> WriteValueWithResultAsync(IBuffer value, GattWriteOption writeOption);
  }
  public sealed class GattCharacteristicsResult
  public sealed class GattClientNotificationResult
  public enum GattCommunicationStatus {
    ProtocolError = 2,
  }
  public sealed class GattDescriptor {
    IAsyncOperation<GattWriteResult> WriteValueWithResultAsync(IBuffer value);
  }
  public sealed class GattDescriptorsResult
  public sealed class GattDeviceService : IClosable {
    DeviceAccessInformation DeviceAccessInformation { get; }
    GattSession Session { get; }
    public static IAsyncOperation<GattDeviceService> FromIdAsync(string deviceId, GattSharingMode sharingMode);
    IAsyncOperation<GattCharacteristicsResult> GetCharacteristicsAsync();
    IAsyncOperation<GattCharacteristicsResult> GetCharacteristicsAsync(BluetoothCacheMode cacheMode);
    IAsyncOperation<GattCharacteristicsResult> GetCharacteristicsForUuidAsync(GattUuid characteristicUuid);
    IAsyncOperation<GattCharacteristicsResult> GetCharacteristicsForUuidAsync(GattUuid characteristicUuid, BluetoothCacheMode cacheMode);
    public static string GetDeviceSelector(GattUuid gattUuid);
    public static string GetDeviceSelectorForBluetoothDeviceId(BluetoothDeviceId bluetoothDeviceId);
    public static string GetDeviceSelectorForBluetoothDeviceId(BluetoothDeviceId bluetoothDeviceId, BluetoothCacheMode cacheMode);
    public static string GetDeviceSelectorForBluetoothDeviceIdAndGattUuid(BluetoothDeviceId bluetoothDeviceId, GattUuid gattUuid);
    public static string GetDeviceSelectorForBluetoothDeviceIdAndGattUuid(BluetoothDeviceId bluetoothDeviceId, GattUuid gattUuid, BluetoothCacheMode cacheMode);
    IAsyncOperation<GattDeviceServicesResult> GetIncludedServicesAsync();
    IAsyncOperation<GattDeviceServicesResult> GetIncludedServicesAsync(BluetoothCacheMode cacheMode);
    IAsyncOperation<GattDeviceServicesResult> GetIncludedServicesForUuidAsync(GattUuid serviceUuid);
    IAsyncOperation<GattDeviceServicesResult> GetIncludedServicesForUuidAsync(GattUuid serviceUuid, BluetoothCacheMode cacheMode);
    IAsyncOperation<DeviceAccessStatus> RequestAccessAsync(GattSharingMode sharingMode);
  }
  public sealed class GattDeviceServicesResult
  public sealed class GattLocalCharacteristic
  public sealed class GattLocalCharacteristicParameters
  public sealed class GattLocalDescriptor
  public sealed class GattLocalDescriptorParameters
  public sealed class GattPresentationFormat {
    public static GattPresentationFormat FromParts(byte formatType, int exponent, ushort unit, byte namespaceId, ushort description);
  }
  public static class GattProtocolError
  public sealed class GattPublishedService
  public sealed class GattReadClientCharacteristicConfigurationDescriptorResult {
    IReference<byte> ProtocolError { get; }
  }
  public sealed class GattReadRequest
  public sealed class GattReadRequestedEventArgs
  public sealed class GattReadResponse
  public sealed class GattReadResult {
    IReference<byte> ProtocolError { get; }
  }
  public sealed class GattReliableWriteTransaction {
    IAsyncOperation<GattWriteResult> CommitWithResultAsync();
  }
  public sealed class GattServiceProvider
  public sealed class GattServiceProviderAdvertisingParameters
  public sealed class GattServiceProviderResult
  public enum GattServiceProviderStatus
  public sealed class GattServiceProviderStatusChangedEventArgs
  public enum GattServiceType
  public sealed class GattSession : IClosable
  public enum GattSessionStatus
  public sealed class GattSessionStatusChangedEventArgs
  public enum GattSharingMode
  public sealed class GattSubscribedClient
  public sealed class GattUuid
  public sealed class GattWriteRequest
  public sealed class GattWriteRequestedEventArgs
  public sealed class GattWriteResponse
  public sealed class GattWriteResult
}
namespace Windows.Devices.Bluetooth.Rfcomm {
  public sealed class RfcommDeviceService : IClosable {
    public static IAsyncOperation<RfcommDeviceServicesResult> FromIdWithResultAsync(string deviceId);
  }
  public sealed class RfcommServiceProvider {
    public static IAsyncOperation<RfcommServiceProviderResult> CreateWithResultAsync(RfcommServiceId serviceId);
  }
  public sealed class RfcommServiceProviderResult
}
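
Because these additions are not yet implemented, there is no official sample to follow; the sketch below is only a guess at how the new result-based GATT pattern might be used, built from the signatures in the listing above. The members it reads off the result classes (Status, Services, Characteristics) are assumptions, as they do not appear in this listing.

using System.Threading.Tasks;
using Windows.Devices.Bluetooth;
using Windows.Devices.Bluetooth.GenericAttributeProfile;
using Windows.Security.Cryptography;

public static class GattWriteSketch
{
    // Speculative sketch only: these APIs are pre-release and not yet implemented.
    // Members used on the result classes (Status, Services, Characteristics) are
    // assumptions inferred from the method signatures in the listing above.
    public static async Task WriteFirstCharacteristicAsync(BluetoothLEDevice device)
    {
        // The new pattern returns a result object carrying a communication
        // status instead of throwing on failure.
        GattDeviceServicesResult serviceResult = await device.GetGattServicesAsync();
        if (serviceResult.Status != GattCommunicationStatus.Success)
            return;

        GattDeviceService service = serviceResult.Services[0];
        GattCharacteristicsResult charResult = await service.GetCharacteristicsAsync();
        if (charResult.Status != GattCommunicationStatus.Success)
            return;

        // WriteValueWithResultAsync surfaces a GattWriteResult, which can report
        // the new ProtocolError value added to GattCommunicationStatus.
        var value = CryptographicBuffer.ConvertStringToBinary("hello", BinaryStringEncoding.Utf8);
        GattWriteResult writeResult = await charResult.Characteristics[0].WriteValueWithResultAsync(value);
    }
}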
 

Windows Ink 1: Introduction to Ink and Pen


Using a pen with a computer has an interesting history that goes farther back than you’d think. In 1888, the first patent for an “electric stylus device for capturing handwriting” was issued to Elisha Gray for the Telautograph. In fact, pen input was being used 20 years before mouse and GUI input, with systems like the Styalator tablet demonstrated by Tom Dimond in the 1950s and the RAND tablet in the 1960s; both could recognize freehand writing and turn it into computer-recognizable characters and words.

In 1992, Microsoft made its first major entrance into the pen-input space with Windows for Pen Computing; there was also the NCR tablet, which ran Windows 3.1 with pen input as an option for interacting with applications.

New ways to use Windows Ink

In the Windows 10 Anniversary Update, inking (pen input) has taken center stage. Microsoft recently announced the Surface Studio, an all-in-one machine designed to empower the creative process with a 28-inch, pen-enabled PixelSense screen. With such a large working area for the pen and the thin profile of the PC, the user can focus on what matters: the art.

In addition to having the work front and center, the user can now use new methods of input, such as the Surface Dial, to drive your application’s inking features. As a developer, you can use the Radial Controller APIs to make accessing those inking features a natural and smooth experience for the user.

Let’s start exploring Windows Ink from two perspectives: the consumer’s and the developer’s.

User’s Perspective

On PCs with stylus support, the Windows Ink Workspace is front and center in the system tray. For the consumer, this is a highly convenient way to quickly access the applications in the Workspace: Sticky Notes, Sketchpad and Screen Sketch, as you see here:

[Figure: the Windows Ink Workspace flyout in the system tray]

Depending on the PC’s pen you’re using, the pen can provide some natural interactions before you even start writing on the screen. Using a Surface Book as an example, the Surface Pen lets you quickly launch an application by clicking the pen’s eraser. A single click, a double click or a click-and-hold can each perform a different action. Which action is taken depends on what the user has set; this is highly configurable from the PC’s Pen settings page, as seen here:

[Figure: the Pen settings page for configuring pen-button click actions]

There are other settings you can configure to further customize your experience. Windows 10 already ignores your palm touching the screen while you’re writing, but you may want to ignore touch input altogether. These options can be set on the same settings pane:

[Figure: Pen settings for ignoring touch input while using the pen]

Ignoring touch input while using the pen is disabled by default because there are great simultaneous pen and touch scenarios. A good example of this would be the Windows Ink ruler! You can use one hand for the pen and the other hand to move the ruler on the screen.

Now that we’ve taken a high-level look at the Windows 10 Anniversary Update’s inking features, let’s switch gears and take a look at them from a developer’s perspective.

Developer’s Perspective

Pen input and handwriting recognition have traditionally required a specialized developer skillset. You would have to detect the strokes made on the canvas and use complex algorithms to determine which character was written. With the Windows 10 Anniversary Update SDK, this is no longer the case. You can add inking support to your application with just a couple of lines of code.

Let’s make a small example that lets the user draw in an area of your UWP (Universal Windows Platform) app. This example can be added to any UWP app that uses the Anniversary Update SDK.

To enable inking, you only need to add the following to your XAML.


<InkCanvas x:Name="inkCanvas" />

That’s it! Wherever you place the InkCanvas UIElement, the user can draw on it with a pen using the default ink settings. Here’s what it looks like at runtime after I’ve written a special message:

[Figure: the InkCanvas at runtime with a handwritten message]

The InkCanvas’s built-in defaults make it very easy to get started. However, what if you wanted to let the user change the color of the ink, or the thickness of the stroke? You can add this functionality quickly by adding an InkToolbar UIElement to your XAML. The only thing you need to do to wire it up is tell it which InkCanvas it targets:


<InkToolbar x:Name="inkToolbar" TargetInkCanvas="{x:Bind inkCanvas}" />

Note: If you see a XAML designer error when you add the InkToolbar, you can safely ignore this as it is a known issue that is being worked on. Your code will run fine.

Let’s rerun our test app and see what this looks like after using a couple of the InkToolbar’s default tools: the ruler and a different ink color:

[Figure: inking with the ruler stencil and a new ink color]

This is all you need to have inking enabled in the app. However, you might also want to persist the user’s strokes so they can be saved and reloaded at another time.

Saving and Loading Ink

You can embed the ink data within a GIF file so that you can save and load the user’s work. This is easily done using the InkPresenter, which is available as a read-only property of the InkCanvas.

Here’s an example of getting all the ink that’s on the canvas and saving it to a file:


        private async Task SaveInkAsync()
        {
            if (inkCanvas.InkPresenter.StrokeContainer.GetStrokes().Count > 0)
            {
                // Select a StorageFile location and set some file attributes
                var savePicker = new Windows.Storage.Pickers.FileSavePicker();
                savePicker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.PicturesLibrary;
                savePicker.FileTypeChoices.Add("Gif with embedded ISF", new List<string> {".gif"});

                var file = await savePicker.PickSaveFileAsync();

                if (null != file)
                {
                    using (IRandomAccessStream stream = await file.OpenAsync(FileAccessMode.ReadWrite))
                    {
                        // This single method will get all the strokes and save them to the file
                        await inkCanvas.InkPresenter.StrokeContainer.SaveAsync(stream);
                    }
                }
            }
        }

Then, the next time the user wants to load an old drawing, or when you want to properly resume an application that was terminated, you only need to load that file back into the canvas. This is just as easy as saving:


        private async Task LoadInkAsync()
        {
            // Open a file picker
            var openPicker = new Windows.Storage.Pickers.FileOpenPicker();
            openPicker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.PicturesLibrary;

            // filter files to show both gifs (with embedded isf) and isf (ink) files
            openPicker.FileTypeFilter.Add(".gif");
            openPicker.FileTypeFilter.Add(".isf");

            var file = await openPicker.PickSingleFileAsync();

            if (null != file)
            {
                using (var stream = await file.OpenSequentialReadAsync())
                {
                    // Just like saving, it's only one method to load the ink into the canvas
                    await inkCanvas.InkPresenter.StrokeContainer.LoadAsync(stream);
                }
            }
        }
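
To put these helpers to work, you might call them from button click handlers. Here's a minimal sketch, assuming hypothetical Save and Load buttons in your UI:

private async void SaveButton_Click(object sender, RoutedEventArgs e)
{
    await SaveInkAsync();
}

private async void LoadButton_Click(object sender, RoutedEventArgs e)
{
    await LoadInkAsync();
}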

To see this code, and many other demos, take a look at the SimpleInk demo on the official Universal Windows Platform samples GitHub page.

What’s next?

Getting started with Windows Ink is quick and easy. However, you can also create highly customized inking applications. In the next Windows Ink series post, we'll dig deeper into the InkPresenter, pen attributes, custom pens and a custom InkToolbar, and explore a more complex ink data scenario that enables sharing and printing!

Resources

Windows Ink 2: Digging Deeper with Ink and Pen


In the last post, we explored a brief history of pen computing and introduced you to how easy it is to get started with Windows Ink in your Universal Windows Platform app. You saw that you can enable inking by adding a single line of code, an InkCanvas, to your app. You also saw that adding another single line of code, the InkToolbar, gives the user additional pen-related tools like pen-stroke color and stroke type.

In this post, we’ll dig deeper into how we can further customize the pen and ink experience to make your application a delightful inking experience for the user. Let’s build a Coloring Book application!

Customizing The Inking Experience

Getting Started

To get started, let’s put in an InkCanvas on the page:


<InkCanvas x:Name="myInkCanvas"/>

By default, the InkCanvas’s input is set to only accept strokes from a Pen. However, we can change that by setting the InputDeviceTypes property of the InkCanvas’s InkPresenter. In the page constructor, we want to configure the InkCanvas so that it works for pen, mouse and touch:


myInkCanvas.InkPresenter.InputDeviceTypes = Windows.UI.Core.CoreInputDeviceTypes.Pen 
                | Windows.UI.Core.CoreInputDeviceTypes.Mouse 
                | Windows.UI.Core.CoreInputDeviceTypes.Touch;

As we did in the last article, we'll add an InkToolbar and bind it to myInkCanvas, but this time we're going to put it within a CommandBar. This keeps it next to the other buttons we'll add later, like Save and Share.


<CommandBar Name="myCommandBar" IsOpen="True" >
    <CommandBar.Content>
        <InkToolbar x:Name="myInkToolbar" TargetInkCanvas="{x:Bind myInkCanvas}"/>
    </CommandBar.Content>
</CommandBar>

Note: If you see a XAML designer error when you add the InkToolbar, you can safely ignore this as it is a known issue that is being worked on. Your code will run fine.

However, this time we also want to provide the user with some additional InkToolbar options. There are two main ways to do this with the InkToolbar; we can use a:

  • Built-in InkToolbar pen button
  • Custom InkToolbar pen button

Built-in InkToolbar pens

Let’s start with an example of a built-in option, the InkToolbarBallPointPenButton. This is an ‘out-of-the-box’ InkToolbar button that, when selected in the InkToolbar, activates the BallPointPen. To add this, you place it within the InkToolbar’s content, like so:


<CommandBar Name="myCommandBar" IsOpen="True" >
    <CommandBar.Content>
        <InkToolbar x:Name="myInkToolbar" TargetInkCanvas="{x:Bind myInkCanvas}">
            <InkToolbarBallpointPenButton Name="penButton" />
        </InkToolbar>
    </CommandBar.Content>
</CommandBar>

If you ran the app now, your InkToolbar would look like this:

picture1

Custom InkToolbar Pens

Creating a custom pen is rather straightforward and requires very little code. Let’s start with the basic requirement: We need to create a class that inherits from InkToolbarCustomPen and give it some attributes that define how it will draw.  Let’s take this step by step and make a custom highlighter marker.

First, let’s add a new class to your project.  Name the class “MarkerPen,” add the following using statements and inherit from InkToolbarCustomPen:


using Windows.UI;
using Windows.UI.Input.Inking;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media;

class MarkerPen : InkToolbarCustomPen
{
}

In this class, we only need to override the CreateInkDrawingAttributesCore method. Add the following method to the class now:


protected override InkDrawingAttributes CreateInkDrawingAttributesCore(Brush brush, double strokeWidth)
{
}

Within that method we can start setting some drawing attributes. This is done by making an instance of InkDrawingAttributes and setting some properties. Here are the attributes I’d like the pen to have:

  • Act like a highlighter
  • Has a round pen tip shape
  • Has a red stroke color as the default color
  • Be twice as thick as the user’s stroke

Here’s how we can fulfill those requirements:


InkDrawingAttributes inkDrawingAttributes = new InkDrawingAttributes();

// Set the PenTip (can also be a rectangle)
inkDrawingAttributes.PenTip = PenTipShape.Circle;

// Set the default color to Red 
SolidColorBrush solidColorBrush = brush as SolidColorBrush;
inkDrawingAttributes.Color = solidColorBrush?.Color ?? Colors.Red;

// Make sure it draws as a highlighter
inkDrawingAttributes.DrawAsHighlighter = true;

// Set the brush stroke
inkDrawingAttributes.Size = new Windows.Foundation.Size(strokeWidth * 2, strokeWidth * 2);

return inkDrawingAttributes;

That’s it, your custom pen is done. Here’s the completed class:


using Windows.UI;
using Windows.UI.Input.Inking;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media;

class MarkerPen : InkToolbarCustomPen
{
        protected override InkDrawingAttributes CreateInkDrawingAttributesCore(Brush brush, double strokeWidth)
        {
            InkDrawingAttributes inkDrawingAttributes = new InkDrawingAttributes();
            inkDrawingAttributes.PenTip = PenTipShape.Circle;
            SolidColorBrush solidColorBrush = brush as SolidColorBrush;
            inkDrawingAttributes.Color = solidColorBrush?.Color ?? Colors.Red;
            inkDrawingAttributes.DrawAsHighlighter = true;
            inkDrawingAttributes.Size = new Windows.Foundation.Size(strokeWidth * 2, strokeWidth * 2);
            return inkDrawingAttributes;
        }
}

Now, let’s go back to the page where you have your InkToolbar and InkCanvas. We want to create Resources section for your page that contains a StaticResource instance of the custom pen. So, just above the root Grid element, add the following Resources code:


<Page ...> 

    <Page.Resources>
        <local:MarkerPen x:Key="MarkerPen"/>
    </Page.Resources>

    <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
...
    </Grid>
</Page>

A quick note about XAML Resources: The page’s resources list is a key/value dictionary of objects that you can reference using the resource’s key. We’ve created an instance of our MarkerPen class, local:MarkerPen, and given it a key value of “MarkerPen” (if you want to learn more about XAML resources, see here).
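
Incidentally, a resource declared this way can also be retrieved in code-behind by its key. Here's a minimal sketch (not something this app needs, just for illustration):

// Look up the custom pen instance declared in Page.Resources by its key
var markerPen = (MarkerPen)this.Resources["MarkerPen"];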

We can now use that key in an InkToolbarCustomPenButton's CustomPen property. This is best explained by the code, so let's break it down:

In your InkToolbar, add an InkToolbarCustomPenButton and give it a name:


<InkToolbar>
   <InkToolbarCustomPenButton Name="markerButton"></InkToolbarCustomPenButton>
</InkToolbar>

The InkToolbarCustomPenButton has a CustomPen property:


<InkToolbarCustomPenButton Name="markerButton" CustomPen="">

We can now set that CustomPen property using the key of our resource:


<InkToolbarCustomPenButton Name="markerButton" CustomPen="{StaticResource MarkerPen}">

Now, let’s set the SymbolIcon for the button:


<InkToolbarCustomPenButton Name="markerButton" CustomPen="{StaticResource MarkerPen}">
    <SymbolIcon Symbol="Highlight" />
</InkToolbarCustomPenButton>

Next, let’s add an InkToolbarPenConfigurationControl:


<InkToolbarCustomPenButton Name="markerButton" CustomPen="{StaticResource MarkerPen}">
    <SymbolIcon Symbol="Highlight" />
    <InkToolbarCustomPenButton.ConfigurationContent>
         <InkToolbarPenConfigurationControl />
    </InkToolbarCustomPenButton.ConfigurationContent>
</InkToolbarCustomPenButton>

Let’s take a look at what the InkToolbarPenConfigurationControl does for you. Even with a custom implementation of a pen, you still get to use the out-of-the-box Windows Ink components. If the user clicks on your pen after it’s selected, they’ll get a fly-out containing options to change the color and the size of the pen!

However, there’s one little tweak we want to make. By default, you get Black and White as the only colors in the flyout:

picture1

We want a lot of colors, and fortunately, the BallpointPenButton you added earlier has a palette full of colors. We can just use that same palette for our custom pen by binding to it:


<InkToolbarCustomPenButton Name="markerButton" CustomPen="{StaticResource MarkerPen}" Palette="{x:Bind penButton.Palette}" >

Now, here’s what the pen configuration control looks after binding the Palette:

picture3

Whew, okay, the toolbar is coming along nicely! Here’s what we have so far for our CommandBar:


<CommandBar Name="myCommandBar" IsOpen="True">
    <CommandBar.Content>
        <InkToolbar x:Name="myInkToolbar" TargetInkCanvas="{x:Bind myInkCanvas}">
            <InkToolbarBallpointPenButton Name="penButton" />
            <InkToolbarCustomPenButton Name="markerButton" CustomPen="{StaticResource MarkerPen}" Palette="{x:Bind penButton.Palette}" >
                <SymbolIcon Symbol="Highlight" />
                <InkToolbarCustomPenButton.ConfigurationContent>
                    <InkToolbarPenConfigurationControl />
                </InkToolbarCustomPenButton.ConfigurationContent>
            </InkToolbarCustomPenButton>
        </InkToolbar>
    </CommandBar.Content>
</CommandBar>

Now, let’s start adding some commands.

Custom InkToolbar Tool Buttons

The first thing you’d really want in a drawing application is the ability to undo something. To do this we’ll want to add another button to the toolbar; this is easily done using an InkToolbarCustomToolButton. If you’re familiar with adding buttons to a CommandBar, you’ll feel right at home.

In your InkToolbar, add an InkToolbarCustomToolButton and give it a name, “undoButton.”


<InkToolbar x:Name="myInkToolbar" TargetInkCanvas="{x:Bind myInkCanvas}">
...
    <InkToolbarCustomToolButton Name="undoButton"></InkToolbarCustomToolButton>
</InkToolbar>

The button has the familiar button properties, such as a Click event, and it supports a SymbolIcon for content, so let's add those as well.

Here’s what your XAML should look like:


<InkToolbar x:Name="myInkToolbar" TargetInkCanvas="{x:Bind myInkCanvas}">
...
    <InkToolbarCustomToolButton Name="undoButton" Click="Undo_Click" >
        <SymbolIcon Symbol="Undo"/>
    </InkToolbarCustomToolButton>
</InkToolbar>

Now, let’s go to the button’s click event handler.  Here we can do the following to undo strokes that were applied to the InkPresenter, here are the steps:

First, make sure you add the following using statement to the code-behind:


using Windows.UI.Input.Inking;

Then get all the strokes in the InkPresenter’s StrokeContainer:


IReadOnlyList<InkStroke> strokes = myInkCanvas.InkPresenter.StrokeContainer.GetStrokes();

Next, verify that there are strokes to undo before proceeding:


if (strokes.Count > 0)

If there are strokes, select the last one in the container:


strokes[strokes.Count - 1].Selected = true;

Finally, delete that selected stroke using DeleteSelected():


myInkCanvas.InkPresenter.StrokeContainer.DeleteSelected();

As you can see, it’s pretty easy to get access to the strokes that were made by the user and just as easy to remove a stroke. Here is the complete event handler:


private void Undo_Click(object sender, RoutedEventArgs e)
{
    // We can get a list of the strokes that are in the InkPresenter
    IReadOnlyList<InkStroke> strokes = myInkCanvas.InkPresenter.StrokeContainer.GetStrokes();

    // Make sure there are strokes to undo
    if (strokes.Count > 0)
    {
       // select the last stroke
       strokes[strokes.Count - 1].Selected = true;

       // Finally, delete the stroke
       myInkCanvas.InkPresenter.StrokeContainer.DeleteSelected();
    }
}
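
While we're at it, a "clear all" command is even simpler. Here's a minimal sketch, assuming a hypothetical clear button wired up the same way as undoButton:

private void Clear_Click(object sender, RoutedEventArgs e)
{
    // Remove every stroke currently on the canvas in a single call
    myInkCanvas.InkPresenter.StrokeContainer.Clear();
}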

Final InkCanvas configuration

Before we conclude the drawing logic, we need to make sure the page loads with some InkDrawingAttributes presets and InkPresenter configuration. To do this, we can hook into the InkToolbar's Loaded event.

We can do this in the XAML:


<InkToolbar x:Name="myInkToolbar" TargetInkCanvas="{x:Bind myInkCanvas}" Loaded="InkToolbar_Loaded">

The attributes are set in a similar way to how we set them for the custom pen: instantiate an InkDrawingAttributes object and set some properties. This time, however, we pass those attributes to the InkPresenter.

Additionally, a few other things should be addressed:

  • Give the custom pen the same color palette as the ballpoint pen (we already handled this in XAML with the Palette binding)
  • Set the initial active tool
  • Make sure that users can also use the mouse

Here’s the code for the InkCanvas’s Loaded event handler:


private void InkToolbar_Loaded(object sender, RoutedEventArgs e)
{
    // Create an instance of InkDrawingAttributes
    InkDrawingAttributes drawingAttributes = new InkDrawingAttributes();

    // We want the pen pressure to be applied to the user's stroke
    drawingAttributes.IgnorePressure = false;

    // This will set it to that the ink stroke will use a Bezier curve instead of a collection of straight line segments
    drawingAttributes.FitToCurve = true;

    // Update the InkPresenter with the attributes
    myInkCanvas.InkPresenter.UpdateDefaultDrawingAttributes(drawingAttributes);

    // Set the initial active tool to our custom pen
    myInkToolbar.ActiveTool = markerButton;

    // Finally, make sure that the InkCanvas will work for a pen, mouse and touch
    myInkCanvas.InkPresenter.InputDeviceTypes = Windows.UI.Core.CoreInputDeviceTypes.Pen 
                | Windows.UI.Core.CoreInputDeviceTypes.Mouse 
                | Windows.UI.Core.CoreInputDeviceTypes.Touch;
}

Saving, Sharing and Loading

Now that you’ve got a decent working area, we want to be able to save, load and share the user’s work. In the last post, we showed a simple way to save and load the canvas. However, in our Coloring Book app, we want to have the image and the ink data saved separately so that we can easily share the image for display and sharing purposes, but save, load and edit inking data as well.

Saving Ink Data

As we did in the last post, you can save the ink strokes to a file using the StrokeContainer's SaveAsync method. What we'll do differently here is that right after we've saved the ink file, we'll also save a parallel image file in the cache. Although we're able to embed the stroke data into the GIF we saved, having a temporary image stored in the cache makes sharing and displaying the image in the app more convenient.
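
For reference, here's a minimal sketch of that ink-file save. The folder choice is an assumption for illustration (the demo on GitHub has its own folder setup); Constants.inkFile is the file-name constant the demo uses:

// Sketch only: save the raw ink (ISF) stroke data to an app data file.
// ApplicationData.Current.LocalFolder is an assumed location.
StorageFolder folder = ApplicationData.Current.LocalFolder;
StorageFile inkFile = await folder.CreateFileAsync(Constants.inkFile, CreationCollisionOption.ReplaceExisting);

using (IRandomAccessStream stream = await inkFile.OpenAsync(FileAccessMode.ReadWrite))
{
    // SaveAsync writes every stroke currently in the StrokeContainer
    await myInkCanvas.InkPresenter.StrokeContainer.SaveAsync(stream);
}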

So, at the end of your save button’s click handler, you want to create a new (or get an existing) StorageFile for the image:


// Save inked image.
StorageFile myInkedImageFile = await folder.CreateFileAsync(Constants.inkedImageFile, CreationCollisionOption.ReplaceExisting);
await Save_InkedImageToFile(myInkedImageFile);

Next, we pass the myInkedImageFile StorageFile reference to the Save_InkedImageToFile method, which saves the image to the file:


private async Task Save_InkedImageToFile(StorageFile saveFile)
{
    if (saveFile != null)
    {
…
        using (var outStream = await saveFile.OpenAsync(FileAccessMode.ReadWrite))
        {
            await Save_InkedImageToStream(outStream);
        }
…
     }
}

And finally, we get the bitmap from the canvas into the file in the Save_InkedImageToStream method; this is where we leverage Win2D (available via NuGet) to get a great-looking bitmap from the canvas:


private async Task Save_InkedImageToStream(IRandomAccessStream stream)
{
    var file = await StorageFile.GetFileFromApplicationUriAsync(((BitmapImage)myImage.Source).UriSource);

    CanvasDevice device = CanvasDevice.GetSharedDevice();

    var image = await CanvasBitmap.LoadAsync(device, file.Path);

    using (var renderTarget = new CanvasRenderTarget(device, (int)myInkCanvas.ActualWidth, (int)myInkCanvas.ActualHeight, image.Dpi))
    {
        using (CanvasDrawingSession ds = renderTarget.CreateDrawingSession())
        {
            ds.Clear(Colors.White); 
            ds.DrawImage(image, new Rect(0, 0, (int)myInkCanvas.ActualWidth, (int)myInkCanvas.ActualHeight));
            ds.DrawInk(myInkCanvas.InkPresenter.StrokeContainer.GetStrokes());
         }

         await renderTarget.SaveAsync(stream, CanvasBitmapFileFormat.Png);
    }
}

You might ask: why is there a separate method for getting the stream instead of doing it all in one place? The first reason is that we want to be responsible developers and make sure our method names describe the action the methods perform. More importantly, though, we want to reuse this method later to share the user's art. With a stream, it's not only easier to share; you can even send the image to a printer.

Sharing the result

Now that the image is saved, we can share it. The approach here is the same as in other UWP sharing scenarios: you use the DataTransferManager. You can find many examples of how to use it in the official UWP samples on GitHub.

For the purposes of this article, we'll focus only on the DataTransferManager's DataRequested event handler (you can see the full sharing code in the Coloring Book demo on GitHub). This is where the Save_InkedImageToStream method gets to be reused!


private async void DataRequested(DataTransferManager sender, DataRequestedEventArgs e)
{
    DataRequest request = e.Request;
    DataRequestDeferral deferral = request.GetDeferral();

    request.Data.Properties.Title = "A Coloring Page";
    request.Data.Properties.ApplicationName = "Coloring Book";
    request.Data.Properties.Description = "A coloring page sent from my Coloring Book app!";

    using (InMemoryRandomAccessStream inMemoryStream = new InMemoryRandomAccessStream())
    {
        await Save_InkedImageToStream(inMemoryStream);
        request.Data.SetBitmap(RandomAccessStreamReference.CreateFromStream(inMemoryStream));
    }

    deferral.Complete();
}
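
Keep in mind that DataRequested only fires once a share operation has been started. As a minimal sketch (the Share button and its handler here are hypothetical, not part of the demo), you subscribe to the event once and then show the share UI on demand:

public MainPage()
{
    InitializeComponent();

    // Subscribe to the share event once, when the page is created
    DataTransferManager.GetForCurrentView().DataRequested += DataRequested;
}

// A hypothetical Share button's click handler; ShowShareUI opens the
// system share pane, which in turn raises DataRequested
private void ShareButton_Click(object sender, RoutedEventArgs e)
{
    DataTransferManager.ShowShareUI();
}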

Loading Ink Data from a file

In our Coloring Book app, we want the user to continue working on previous drawings as if they never stopped. We’re able to save the ink file and capture and save the image of the work, but we also need to load the ink data properly.

In the last post, we covered how to load the strokes from the file; let's review that now.


// Get a reference to the file that contains the inking stroke data
StorageFile inkFile = await folder.GetFileAsync(Constants.inkFile);

if (inkFile != null)
{
    IRandomAccessStream stream = await inkFile.OpenAsync(Windows.Storage.FileAccessMode.Read);

    using (var inputStream = stream.GetInputStreamAt(0))
    {
        // Load the strokes back into the StrokeContainer
        await myInkCanvas.InkPresenter.StrokeContainer.LoadAsync(inputStream);
    }

    stream.Dispose();
}

That’s all there is to loading sketch’s ink data. All the strokes, and the ink’s attributes, will be loaded into the InkCanvas and the user can continue working on his or her creation.

In the next post, we’ll look at some other real-world applications of Windows Ink and how inking can empower educational and enterprise applications. We’ll also take a look at some of the new hardware and APIs available that make using Windows Ink a go-to item for design professionals.

Resources

Windows at Microsoft Connect(); // 2016


During Connect(); // 2016, the Windows team talked about the Universal Windows Platform and about some of the great tooling improvements that have happened in recent months.

Kevin Gallo, VP and Program Manager Director of the Windows Platform team, was joined by Mazhar Mohammed and Dave Alles for an hour-long panel discussion and Q&A on Channel 9. Mazhar is the Director of Partner Services for the Windows Store team, and Dave is the Group Program Manager for the team that created the Maps app for Windows.

Check out the video below to see the interesting conversation between leaders of the teams that create the Windows Platform and the Store, and one of the teams that uses both in creating a major Windows app. Kevin talks about why developers should target UWP, while Mazhar discusses the constant improvements in the Store and Dev Center. Dave talks about how a first-party app team thinks about new features and interacts with the platform team to provide feedback and request functionality. This is actually very similar to the way external developers provide input and feedback via UserVoice!

In the keynote, Stacey Doerr joined Scott Hanselman on stage to talk about automated UI testing of Windows apps using WinAppDriver and Appium. This isn't just for testing UWP apps, but all Windows applications, including applications built in VB6 or Delphi and more. For more details about WinAppDriver, check out Yosef's in-depth blog post: Windows Application Driver for PC integrates with Appium. Scott also has a great overview of getting started with testing on Windows using Appium.

The Windows teams also did the following four presentations to provide deeper dives into the Desktop Bridge and a set of new and improved developer tooling capabilities:


Windows Ink 3: Beyond Doodling


In the first post in this series, we took a quick look at Windows Ink using the InkCanvas and saw that it can take as little as one line of code to add inking support to your application. In the second post, we showed you how to customize the ink experience in your app with InkToolbarCustomPen and InkToolbarCustomToolButton, in addition to out-of-the-box items like InkToolbarBallpointPenButton and InkToolbarPenConfigurationControl.

In both of those explorations, we stayed within the context of a drawing-style application. In today's post, we'll look at how Windows Ink can bring the natural input experience of using a pen to other types of applications and scenarios.

Pen input can be useful in the majority of applications that require some sort of user input. Here are a few examples of such scenarios:

  • Healthcare: Doctors, nurses, mental health professionals
    • A digital Patient Chart, allowing a medical professional to keep using the efficient natural note keeping of medical shorthand alongside accurate data entry.
  • School: Teachers, students, and administrators
    • A student could take an exam using Windows Ink on a digital exam and the teacher could mark up on that actual exam as if it were paper.
  • Field services: Police, fire, utility engineers
    • Detectives generally keep a small notepad with them to record investigative details. Using ink to input these details makes the notes digitally searchable, enabling faster situational awareness and AI interpretation.
  • Music: composers, musicians
    • Writing notation digitally with a pen combines the world of natural input with the power of digital music processing.

Let’s explore two of those possibilities: music and healthcare.

A Music Scenario

Music composition has traditionally been a pen and paper experience. You may or may not have paper with the music staves already printed on it, but in the end, the composer is the one who writes down the notes, key signatures, and other musical notation for a musician to play. Composers have been trained and have years of experience writing music on paper.

What if an application used a digital pen and the screen as the main method for the composer to create music? Pen input would be a natural way to enter the information, while also gaining the advantages of having software process that information.

An example of this processing would be validation of the musical notation; it would also offer a way for the music to be played back immediately after it is entered. There have been many programs in the past that allow music notation to be entered and played back, but using a pen instead of a keyboard and mouse brings this to a new, natural level.

A Healthcare Scenario

Healthcare professionals have long used pen and paper to record and convey important information. Sometimes this information is written using a medical shorthand on patient charts. This shorthand contains a lot of information in a smaller area so medical professionals can read a patient’s chart quickly.

However, we also have information that needs to be fully written out, like a patient's name or follow-up instructions for a patient. This kind of information should be in the form of text that is clearly readable by anyone and usable for data entry.

We can fulfill both of these requirements with Windows Ink. For the notation and shorthand, we can record the ink strokes as we did previously in the sketching app examples. For the text entry, you can convert the ink using handwriting recognition.

Let’s make a small Medical Chart demo app to see how this is done.

Simple Doctor’s notes app

To show how you can implement enterprise features, let's use handwriting recognition! You can easily get the user's strokes as text using the InkCanvas and just a few lines of code. This is all built into the SDK; no extraneous coding or specialized skillset is required.

Let’s start with a File > New UWP app and on the MainPage, let’s make three Grid rows. The top two rows will contain two different InkCanvas objects and the last row is for a CommandBar with a save button.

The second row’s InkCanvas will be for the doctor’s handwritten using shorthand. It is more like a sketch app and is tied to an InkToolbar. The ink will be pressure-sensitive and can be further altered using the InkToolbar. You can go back to the last post in this series to see how to do this.

Here’s a quick sketch of what the page layout should be:

picture1

Now that we have a general page layout, let’s focus on the top InkCanvas first. This is the one we’ll use for handwriting recognition for the patient’s name. We want the ink to be plain and clear, so we don’t want an InkToolbar for this InkCanvas.

The code for this row is:


<Grid Grid.Row="1">
     <InkCanvas x:Name="NameInkCanvas" />
</Grid>

Now let’s look at the second row’s InkCanvas. This is the one we want to have an InkToolbar for so the notes can have a rich ink experience. Here’s what that implementation looks like:


<Grid>
    <InkCanvas x:Name="NotesInkCanvas" />

    <InkToolbar TargetInkCanvas="{x:Bind NotesInkCanvas}"
                HorizontalAlignment="Right"
                VerticalAlignment="Top" />
</Grid>

There are a couple of other little things we want to add, such as the TextBlock at the top of the page where the patient's name will appear after handwriting recognition is complete. Let's take a look at the entire page with all the parts in place:


<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
        <Grid.RowDefinitions>
            <RowDefinition />
            <RowDefinition />
            <RowDefinition Height="Auto" />
        </Grid.RowDefinitions>

        <!-- Top row for handwriting recognition of the patient name -->
        <Grid x:Name="PatientInfoGrid">
            <Grid.RowDefinitions>
                <RowDefinition Height="Auto" />
                <RowDefinition />
                <RowDefinition Height="Auto" />
            </Grid.RowDefinitions>

            <TextBlock x:Name="PatientNameTextBlock"
                       Text="Patient Name"
                       Style="{StaticResource TitleTextBlockStyle}"
                       HorizontalAlignment="Center" />

            <Grid Grid.Row="1"
                  BorderThickness="2"
                  BorderBrush="#FF9F9F9F">
                <InkCanvas x:Name="NameInkCanvas" />
            </Grid>

            <Button x:Name="RecognizeHandwritingButton"
                    Content="Write patient name in box above and click here to complete"
                    Click="RecognizeHandwritingButton_OnClick"
                    Grid.Row="2"
                    HorizontalAlignment="Center"
                    Margin="5" />
        </Grid>

        <!-- Second row for the doctor's notes -->
        <Grid x:Name="NotesGrid"
              Grid.Row="1">
            <Grid.RowDefinitions>
                <RowDefinition Height="Auto" />
                <RowDefinition />
                <RowDefinition Height="Auto" />
            </Grid.RowDefinitions>

            <TextBlock Text="Notes"
                       Style="{StaticResource SubtitleTextBlockStyle}"
                       HorizontalAlignment="Center" />

            <Grid Grid.Row="1"
                  BorderThickness="2"
                  BorderBrush="#FF9F9F9F">
                <InkCanvas x:Name="NotesInkCanvas" />

                <InkToolbar TargetInkCanvas="{x:Bind NotesInkCanvas}"
                            HorizontalAlignment="Right"
                            VerticalAlignment="Top" />
            </Grid>
        </Grid>

        <CommandBar Grid.Row="2">
            <AppBarButton x:Name="SaveChartButton"
                          Icon="Save"
                          Label="Save Chart"
                          Click="SaveChartButton_OnClick"/>
        </CommandBar>
    </Grid>

With the front end done, let's look at the code-behind and examine the InkCanvas setup and the button click event handlers. In the page constructor, right after InitializeComponent, we set up some inking attributes for both InkCanvases:


// Setup the top InkCanvas
NameInkCanvas.InkPresenter.InputDeviceTypes =
                Windows.UI.Core.CoreInputDeviceTypes.Mouse |
                Windows.UI.Core.CoreInputDeviceTypes.Pen;

NameInkCanvas.InkPresenter.UpdateDefaultDrawingAttributes(new InkDrawingAttributes
{
     Color = Windows.UI.Colors.Black,
     IgnorePressure = true,
     FitToCurve = true
});

// Setup the doctor's notes InkCanvas
NotesInkCanvas.InkPresenter.InputDeviceTypes =
                Windows.UI.Core.CoreInputDeviceTypes.Mouse |
                Windows.UI.Core.CoreInputDeviceTypes.Pen;

NotesInkCanvas.InkPresenter.UpdateDefaultDrawingAttributes(new InkDrawingAttributes
{
    IgnorePressure = false,
    FitToCurve = true
});

To get the patient’s name into the chart, the healthcare worker writes the name in the top InkCanvas and presses the RecognizeHandwritingButton. That button’s click handler is where we do the recognition work. In order to perform handwriting recognition, we use the InkRecognizerContainer object.


var inkRecognizerContainer = new InkRecognizerContainer();

With an instance of InkRecognizerContainer, we call RecognizeAsync and pass it the InkPresenter's StrokeContainer along with InkRecognitionTarget.All, telling it to use all the ink strokes.


// Recognize all ink strokes on the ink canvas.
var recognitionResults = await inkRecognizerContainer.RecognizeAsync(
                    NameInkCanvas.InkPresenter.StrokeContainer,
                    InkRecognitionTarget.All);

This will return a list of InkRecognitionResult objects, which you can iterate over, calling GetTextCandidates on each one. The result of GetTextCandidates is a list of strings that the recognition engine thinks best match the ink strokes. Generally, the first candidate is the most accurate, but you can iterate over the candidates to find the best match.

Here’s the implementation of the doctor’s note app; you can see that it just uses the first candidate to demonstrate the approach.


// Iterate through the recognition results; this will loop once for every word detected
foreach (var result in recognitionResults)
{
    // Get all recognition candidates from each recognition result
    var candidates = result.GetTextCandidates();

    // For the purposes of this demo, we'll use the first result
    var recognizedName = candidates[0];

    // Concatenate the results
    str += recognizedName + " ";
}

Here is the full event handler:


private async void RecognizeHandwritingButton_OnClick(object sender, RoutedEventArgs e)
{
    // Get all strokes on the InkCanvas.
    var currentStrokes = NameInkCanvas.InkPresenter.StrokeContainer.GetStrokes();

    // Ensure an ink stroke is present.
    if (currentStrokes.Count < 1)
    {
        await new MessageDialog("You have not written anything in the canvas area").ShowAsync();
        return;
    }

    // Create a manager for the InkRecognizer object used in handwriting recognition.
    var inkRecognizerContainer = new InkRecognizerContainer();

    // inkRecognizerContainer is null if a recognition engine is not available.
    if (inkRecognizerContainer == null)
    {
        await new MessageDialog("You must install handwriting recognition engine.").ShowAsync();
        return;
    }

    // Recognize all ink strokes on the ink canvas.
    var recognitionResults = await inkRecognizerContainer.RecognizeAsync(
                    NameInkCanvas.InkPresenter.StrokeContainer,
                    InkRecognitionTarget.All);

    // Process and display the recognition results.
    if (recognitionResults.Count < 1)
    {
        await new MessageDialog("No recognition results.").ShowAsync();
        return;
    }

    var str = "";

    // Iterate through the recognition results; this will loop once for every word detected
    foreach (var result in recognitionResults)
    {
        // Get all recognition candidates from each recognition result
        var candidates = result.GetTextCandidates();

        // For the purposes of this demo, we'll use the first result
        var recognizedName = candidates[0];

        // Concatenate the results
        str += recognizedName + " ";
    }

    // Display the recognized name
    PatientNameTextBlock.Text = str;

    // Clear the ink canvas once recognition is complete.
    NameInkCanvas.InkPresenter.StrokeContainer.Clear();
}

Lastly, although we covered this in detail in the last post, let's review how to save the doctor's notes (the InkCanvas ink strokes) to a GIF file with embedded ink data:


private async void SaveChartButton_OnClick(object sender, RoutedEventArgs e)
{
    // Get all strokes on the NotesInkCanvas.
    var currentStrokes = NotesInkCanvas.InkPresenter.StrokeContainer.GetStrokes();

    // Strokes present on ink canvas.
    if (currentStrokes.Count > 0)
    {
        // Initialize the picker.
        var savePicker = new FileSavePicker();
        savePicker.SuggestedStartLocation = PickerLocationId.DocumentsLibrary;
        savePicker.FileTypeChoices.Add("GIF with embedded ISF", new List<string>() { ".gif" });
        savePicker.DefaultFileExtension = ".gif";

        // We use the patient's name to suggest a file name
        savePicker.SuggestedFileName = $"{PatientNameTextBlock.Text} Chart";

        // Show the file picker.
        var file = await savePicker.PickSaveFileAsync();

        if (file != null)
        {
            // Prevent updates to the file until updates are finalized with call to CompleteUpdatesAsync.
            CachedFileManager.DeferUpdates(file);

            // Open a file stream for writing
            using (var stream = await file.OpenAsync(FileAccessMode.ReadWrite))
            using (var outputStream = stream.GetOutputStreamAt(0))
            {
                await NotesInkCanvas.InkPresenter.StrokeContainer.SaveAsync(outputStream);
                await outputStream.FlushAsync();
            }

            // Finalize write so other apps can update file.
            var status = await CachedFileManager.CompleteUpdatesAsync(file);

            if (status == FileUpdateStatus.Complete)
            {
                PatientNameTextBlock.Text += " (saved!)";
            }
        }
    }
}

Here’s what the app looks like at runtime:

picture2

This is just a simple example of how to combine different uses of Windows Ink, but it demonstrates how useful Windows Ink can be in an enterprise scenario and shows that it's much more than just a doodling tool.

The patient’s name was recognized and placed in the TextBlock at the top of the app and the doctor’s notes on the bottom can be saved to a file and reloaded exactly as it was written.

Here’s what the doctor’s notes file looks like in Windows File Explorer after it’s been saved. It’s a GIF but also has the embedded ink data that you can load back into the app as ink strokes.

picture3

What’s next?

Think about how you can add inking support to your next application. How can natural pen input help your users enter data in a seamless and delightful manner? You can add inking support with just a few lines of code and bring the Windows Ink experience to your users.

We look forward to the exciting app ideas and scenarios you create using Windows Ink. Let us know what you build by leaving a comment below, sending us a tweet or posting on our Facebook page.

Resources

