Sunday 29 September 2013

Android TextToSpeech API with Xamarin: it talks too!

A recent post covered Apple's new text-to-speech API in iOS 7, but forgot to mention that Android has actually had this capability for a while! It's really easy to add text-to-speech to a Xamarin.Android app: just implement TextToSpeech.IOnInitListener (an interface with a single method, OnInit), then create a new TextToSpeech instance:

speaker = new TextToSpeech (this, this);

and call Speak:

void Speak (string text) {
   var p = new Dictionary<string, string> (); // optional parameters (none needed here)
   speaker.Speak (text, QueueMode.Flush, p);  // Flush: interrupt anything already queued
}
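
Putting those pieces together, a minimal Activity might look something like this (a sketch only: SpeakActivity, the speakerReady flag and the button-less layout are illustrative, not taken from the TaskyPro sample):

using System.Collections.Generic;
using Android.App;
using Android.OS;
using Android.Speech.Tts;

[Activity (Label = "SpeakDemo")]
public class SpeakActivity : Activity, TextToSpeech.IOnInitListener {
   TextToSpeech speaker;
   bool speakerReady;

   protected override void OnCreate (Bundle bundle) {
      base.OnCreate (bundle);
      // this Activity is both the Context and the IOnInitListener
      speaker = new TextToSpeech (this, this);
   }

   // IOnInitListener: called once the engine has finished starting up
   public void OnInit (OperationResult status) {
      speakerReady = (status == OperationResult.Success);
   }

   void Speak (string text) {
      if (!speakerReady) return; // engine not ready (or failed to initialize)
      var p = new Dictionary<string, string> ();
      speaker.Speak (text, QueueMode.Flush, p);
   }
}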

The TaskyPro sample code has been updated so that both the iOS and Android apps have a Speak button. The Android app looks like this:

Check out this cool TtsSetup sample for Xamarin (via StackOverflow) for more details on how to customize the Android TextToSpeech API.
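
For simple tweaks you don't even need a settings screen: once OnInit has succeeded, the engine exposes language, pitch and rate directly (the values below are just illustrative):

// customize the voice after a successful OnInit (sketch; values are illustrative)
speaker.SetLanguage (Java.Util.Locale.Uk);  // British English
speaker.SetPitch (0.9f);                    // slightly deeper voice
speaker.SetSpeechRate (1.1f);               // slightly faster delivery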

Friday 27 September 2013

Built-in Barcode Scanning with iOS7 and Xamarin: MonkeyScan!

Another new iOS 7 feature is built-in support for barcode-scanning via the AVFoundation AVCaptureDevice API. Back in 2012 I threw together MonkeyScan using Windows Azure Services and the ZXing barcode scanning library. For iOS 7 I've updated the code to use the Azure Mobile Services Component and the new iOS 7 barcode scanning API instead.

The app looks like this when scanning a PassKit pass:

The code that sets up an AVCaptureDevice for 'metadata capture' (as opposed to capturing an image or video, I guess :) is shown below:

bool SetupCaptureSession () {
   session = new AVCaptureSession();

   // use the default camera as the capture input
   AVCaptureDevice device = 
      AVCaptureDevice.DefaultDeviceWithMediaType(AVMediaType.Video);
   NSError error = null;
   AVCaptureDeviceInput input = 
      AVCaptureDeviceInput.FromDevice(device, out error);

   if (input == null)
      Console.WriteLine("Error: " + error); 
   else
      session.AddInput(input);

   // the metadata output delivers recognized barcodes to the delegate
   AVCaptureMetadataOutput output = new AVCaptureMetadataOutput();
   var dg = new CaptureDelegate(this);
   output.SetDelegate(dg, MonoTouch.CoreFoundation.DispatchQueue.MainQueue);
   session.AddOutput(output); // MUST add output before setting metadata types!

   output.MetadataObjectTypes = new NSString[] 
      {AVMetadataObject.TypeQRCode, AVMetadataObject.TypeAztecCode};

   // show the live camera feed behind the UI
   AVCaptureVideoPreviewLayer previewLayer = new AVCaptureVideoPreviewLayer(session);
   previewLayer.Frame = new RectangleF(0, 0, 320, 290);
   previewLayer.VideoGravity = AVLayerVideoGravity.ResizeAspectFill.ToString();
   View.Layer.AddSublayer (previewLayer);

   session.StartRunning();
   return true;
}
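
The CaptureDelegate passed to SetDelegate isn't shown above; a minimal version might look something like this sketch (the parent field and its HandleScan method are illustrative stand-ins, not the exact MonkeyScan code):

class CaptureDelegate : AVCaptureMetadataOutputObjectsDelegate {
   MonkeyScanViewController parent; // hypothetical owner to report results to

   public CaptureDelegate (MonkeyScanViewController parent) {
      this.parent = parent;
   }

   // called on the dispatch queue supplied to SetDelegate for each recognized code
   public override void DidOutputMetadataObjects (AVCaptureMetadataOutput captureOutput,
      AVMetadataObject[] metadataObjects, AVCaptureConnection connection) {
      foreach (var metadata in metadataObjects) {
         var readable = metadata as AVMetadataMachineReadableCodeObject;
         if (readable != null)
            parent.HandleScan (readable.StringValue); // hypothetical handler
      }
   }
}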

You can restrict recognition to specific barcode types, as above, or use output.AvailableMetadataObjectTypes to process all supported types.
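
For example, to accept everything the device can decode:

// recognize every barcode type the device supports, instead of a fixed list
output.MetadataObjectTypes = output.AvailableMetadataObjectTypes;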

...and it speaks!
Since the app now requires iOS 7, it can also use the new AVSpeechSynthesizer to speak the scan result (see the previous post, below).

if (valid && !reentry) {
   View.BackgroundColor = UIColor.Green;
   Speak ("Please enter");
} else if (valid && reentry) {
   View.BackgroundColor = UIColor.Orange;
   Speak ("Welcome back");
} else {
   View.BackgroundColor = UIColor.Red;
   Speak ("Denied!");
}

The MonkeyScan GitHub repo has been updated with this code.

Thursday 26 September 2013

iOS SpeechSynthesizer API with Xamarin: it talks!

Mike posted a neat code example today on adding the new iOS 7 AVSpeechSynthesizer API to a Xamarin app.

It's so easy that I added speech synthesis to the TaskBoard to-do list example in about five lines of code. Now the app can read a to-do item back to you :) just by adding this code:

if (UIDevice.CurrentDevice.CheckSystemVersion (7, 0)) {
   SpeakButton.TouchUpInside += (sender, e) => {    // requires iOS 7
      Speak (TitleText.Text + ". " + NotesText.Text);
   };
}

and

void Speak (string text) {
   var speechSynthesizer = new AVSpeechSynthesizer ();

   var speechUtterance = new AVSpeechUtterance (text) {
      Rate = AVSpeechUtterance.MaximumSpeechRate/4,
      Voice = AVSpeechSynthesisVoice.FromLanguage ("en-AU"),
      Volume = 0.5f,
      PitchMultiplier = 1.0f
   };

   speechSynthesizer.SpeakUtterance (speechUtterance);
}
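
If you're not sure which language codes the device supports, AVSpeechSynthesisVoice can list them (a quick sketch):

// list the speech voices installed on the device (iOS 7+)
foreach (var voice in AVSpeechSynthesisVoice.GetSpeechVoices ()) {
   Console.WriteLine (voice.Language); // e.g. "en-AU", "en-GB", "en-US", ...
}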

The UI now looks like this: touch the Speak button to hear the text read back to you.