Alexa Slack



Slack took the business world by storm over the past four years. Alexa has taken consumer households by storm over the past three. It’s only natural that someone would figure out how to unite them. The voice application monitoring and developer tools company Bespoken says you can now “speak to Alexa without talking” by using its Silent Echo bot in your favorite Slack group.


The solution is getting some notice too. It was featured on Product Hunt earlier this week and has a number of favorable comments and upvotes.


Use Alexa by Typing for the First Time


Amazon Echo and its Alexa voice assistant were delivered as a voice-first and voice-only solution. While you can now do some basic messaging by text that Alexa will transform into speech (via the Alexa app) and view visual content on the Echo Show, you must interact with skills by voice. Silent Echo changes this: you can interact with many Alexa skills by text input and text output while in Slack or on a webpage.

In a blog post last week, Bespoken founder and CEO John Kelvie outlined some things that Silent Echo can do:

  • Control smart home devices
  • Interact with and test skills – including card output
  • Use it when you need to be very quiet
  • Or use it when things are very noisy (too noisy for Alexa to hear)

After your Slack administrator adds Silent Echo to your group, you can direct message with it or add it to conversations in channels. For channel conversations, Silent Echo is a “generic” Alexa bot. However, to DM Silent Echo you actually link it to your own Amazon account to provide a personalized Alexa experience.

Experience Alexa Through a Web Page

You can also interact with Alexa by typing on a Bespoken-hosted webpage, so there is no need to wait to get it in your Slack group. Silent Echo is easy to connect to your Amazon Alexa account, and I was able to control smart home devices, check the weather, and access a few skills right away. You will even get cards delivered to your Alexa app showing what you have done.

By the way, the default location is Seattle, Washington. Bespoken actually creates a new Echo device registered to you, and it requires your location by zip code or street address to use geography-based features. You can set this by opening your Alexa app, tapping the hamburger menu in the upper left, and then selecting menu > [username]’s Silent Echo > Device location > Edit.

Some Limitations to Know About

There are some limitations to Silent Echo. You cannot use music services like Spotify or stream radio stations. Also, it is not particularly good with skills that require multiple interactions in sequence. The time-out used by Alexa in its standard processing means these types of skills often shut down before you can type your response. Kelvie says this is expected.

Alexa skills today are not designed to wait for someone to type a response. At some point when multi-modal input by text is commonplace, there will be tools to address this. Today, it is a limitation of what is really a proof-of-concept for text-based interaction with Alexa.

First a Bot, Now an SDK

Bespoken followed the web-based and Slack Silent Echo bots with an SDK for developers. When asked about the purpose of an SDK, Kelvie replied:

The SDK enables lots of use cases. More bots to interact with Alexa without voice input. Unit testing that evaluates the behavior of skills programmatically – it’s simple to write tests that compare the text coming in and out of Alexa. And, validation – a sort of deep testing that goes beyond simple input/output and can exercise skills across a multitude of dimensions such as accents, languages, invocation patterns and the like.

We put out the initial UI to show the possibilities. The UI for the web and Slack is a bit of sizzle, but the SDK is the steak. We see a lot of potential for what we and others can do with it. We plan to tap into the SDK over the next few months in a variety of ways, so stay tuned.

So, it’s not a full Alexa experience, but Silent Echo does provide some Alexa access through typing when using Slack or the web. That will be fun, and occasionally useful, for consumers when speaking is inconvenient. The SDK is pure tooling to help developers build better bots.

Building Your Own Alexa Skill for Slack with Azure Logic Apps

Alexa is Amazon’s voice-controlled personal assistant. It’s super convenient, so you might want to create your own skills as well. You’ll be surprised how easy this can be with the help of Microsoft Azure Logic Apps.

Here’s our sample project: Let’s say you share a family Slack with your spouse and children. Whenever food is ready or the rabbits need to be fed, you want to send a Slack message to your children. This should be as easy as saying:

Alexa, tell my children: food is ready.
Alexa, tell my children to feed the bunnies.

We’ll do all of this with zero lines of code, in less than 15 minutes. Ready, set, go!

Creating a New Skill for Alexa

First of all, you need to create a new skill in the Amazon Apps & Services Developer Portal for Alexa. You might have to accept Amazon’s developer terms and conditions before you can access the developer console. In the portal, select Alexa Skills Kit, then click the orange Add a New Skill button to create a new one. Next, you’ll see the skill information form.

Here, select Custom Interaction Model as the Skill Type, as we’re not developing a smart home, flash briefing, or video skill. Next, select a language of your choice; you can add additional languages later on. Then, name your skill. In this example, we’re using Family Bot. Next, specify an invocation name. Here, I used “my children,” so we can invoke our skill with “Alexa, tell/ask/… my children to…” later on. If you intend to publish the skill at a later point in time, make sure to check the Invocation Name Guidelines. As the skill won’t use audio player, video app, or render template directives, we’ll set all the Global Fields to No. Now, click Save, followed by the Next button.

Next, we’ll create the intents our app supports. I’ll use the Skill Builder Beta in this sample. In order to open it, click the black Launch Skill Builder BETA button on the Interaction Model page. On the left, you can see the three default intents (cancel, stop, help). In our use case, we’d like to support two intents: “feed the bunnies” and “food is ready.” Create a new intent by clicking the Add label next to the Intents menu entry (1). On the right, choose a name (here we use RabbitIntent, 2). Next, click Create Intent (3). Repeat steps 1–3 to create the FoodIntent.

Furthermore, we have to define some sample utterances the user can say to invoke the selected intent. For the RabbitIntent, we’d like to invoke it with “Alexa, tell my children to feed the bunnies.” Hence, select the RabbitIntent from the intent list, type the sample utterance into the text field, and make sure to add it by clicking the plus button or pressing the return key (1). You can add more utterances if you like. Repeat this step for the FoodIntent. Finally, make sure to save the model (2) and build it (3). Building the model can take two to five minutes.
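
For reference, the Skill Builder also offers a code editor that shows the interaction model as JSON. A rough sketch for our model might look like this (the exact wrapper format varies between console versions, so treat this as illustrative):

    {
      "languageModel": {
        "invocationName": "my children",
        "intents": [
          { "name": "AMAZON.CancelIntent", "samples": [] },
          { "name": "AMAZON.HelpIntent", "samples": [] },
          { "name": "AMAZON.StopIntent", "samples": [] },
          { "name": "RabbitIntent", "samples": ["feed the bunnies"] },
          { "name": "FoodIntent", "samples": ["food is ready"] }
        ]
      }
    }

Note that the samples contain only the part after the invocation name; the “Alexa, tell my children to” prefix is handled by Alexa itself.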

Creating a Service Endpoint with Azure Logic App

In 2017, “serverless” was one of the big buzzwords in cloud computing. With serverless compute services, you are completely unaware of the underlying resources: it doesn’t matter whether they run on Windows or Linux, ARM or x86 machines, smartphones or root servers, on Apache or IIS. The cloud provider manages the entire hosting infrastructure up to the server, which also makes these services very cheap to run. Microsoft Azure offers three services for serverless computing: Azure Functions, Event Grid, and Logic Apps.

Azure Logic Apps are comparable to the popular service IFTTT (If This Then That), but offer far more complex workflows and customization. Much like software such as LabVIEW or Windows Workflow Foundation, developers can assemble their workflows in a graphical user interface.

In addition, Azure Logic Apps provide a wide range of connectors for integrating with third-party services such as Facebook, Eventbrite, Adobe Creative Cloud, Harvest, Office 365, and many more. All you need to do is add the action to your Logic App workflow and authenticate with the selected service.

Next, we’ll create a new Logic App in the Azure portal. If you don’t have a Microsoft Azure account yet, you can sign up for a free 30-day trial with a $200 (170 €) budget.

Choose a name, resource group and location for your new Logic App, hit the Create button and wait a few seconds. After the deployment is complete, a notification window appears. Click Go to Resource.

Next, you can choose from different Logic App templates to start with. Alexa communicates with other services via HTTP and JSON-based requests and responses, so the HTTP Request-Response template is a reasonable choice. Click the entry in the list and the Logic App Designer opens.

Here, you can see that the Logic App was initialized with two actions (When a HTTP request is received and Response). Logic Apps are charged per action and connector execution.

Alexa parses the spoken request and then calls the configured service endpoint. Alexa’s request and the service’s response payloads are both formatted in JSON. Alexa’s Request Types Reference shows a sample IntentRequest JSON. Here’s a reduced version, following the documented format, that contains only the fields relevant to our use case:
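
    {
      "version": "1.0",
      "request": {
        "type": "IntentRequest",
        "intent": {
          "name": "FoodIntent"
        }
      }
    }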

The Logic App action When a HTTP request is received allows you to specify a JSON schema. The action parses the request payload and makes the individual fields specified in the JSON schema accessible to subsequent actions.

The schema can be generated from a sample request payload, such as the short IntentRequest from above. In order to do so, click the Edit label next to Using the default values for the parameters and then click Use sample payload to generate schema. A dialog opens where you can paste the sample JSON payload. You might use the reduced sample from above. Click Done to let the designer generate the JSON schema for you.
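
For the reduced sample above, the generated schema should look roughly like this:

    {
      "type": "object",
      "properties": {
        "version": { "type": "string" },
        "request": {
          "type": "object",
          "properties": {
            "type": { "type": "string" },
            "intent": {
              "type": "object",
              "properties": {
                "name": { "type": "string" }
              }
            }
          }
        }
      }
    }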

After that, create a new condition by clicking the plus sign between the request and response actions and selecting Add a condition from the drop-down menu. Here, we want to make sure that we only respond to IntentRequests. If the request is of another type, such as a LaunchRequest (e.g., when the user says “Alexa, open my children”), we won’t post a Slack message. To do so, click the left value field and choose the type field parsed from the request JSON. Select the is equal to comparison and type IntentRequest into the right value field.
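
Behind the designer, this condition is stored in the Logic App’s code view as a workflow-definition expression. A rough sketch of the generated fragment (the designer writes this for you, so this is just for reference):

    "expression": "@equals(triggerBody()?['request']?['type'], 'IntentRequest')"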

The condition introduces two branches: an “if true” and an “if false” branch. We’ll leave the “false” branch aside and only look at the “true” branch; you can always extend your Logic App later on. In the “true” branch, we’ll add a new switch case block. To do so, click the More button within the branch’s block and select Add a switch case from the drop-down menu. Here, pick the name field that was parsed from the request JSON. The switch case introduces two additional branches within the “true” branch: a case for a specific value of the name field, and the default case. For simplicity’s sake, we’ll use the first case to react to the FoodIntent and the default case (implicitly) for the RabbitIntent. Again, you can extend this scenario later on and introduce an explicit case for the RabbitIntent by clicking the plus sign between the first and the default case.
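
In the code view, the switch evaluates the intent name in much the same way. Roughly:

    "expression": "@triggerBody()?['request']?['intent']?['name']"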

Now we can make sure that Alexa’s request actually launches an intent, and distinguish between the food and rabbit intents. This allows us to respond to both intents by posting an appropriate Slack message. Azure Logic Apps even provide a connector for Slack that could easily be added as an action. However, the Slack connector requests access to all the channels the user can read, which is a bit much for our use case. Instead, we’ll use web hooks, a built-in Slack feature.

Integrating the Azure Logic App with Slack

In order to configure a web hook for Slack, you have to open your workspace’s App Directory. You can access it by clicking your team name in the Slack app and selecting Manage Apps from the drop-down menu.

Then, click Custom Integrations (1) and Incoming WebHooks (2). Make sure that your Slack workspace’s administrator has enabled them. For this sample, we’ll add two incoming web hooks, one handling rabbit messages, the other handling food messages. To do so, click Add Configuration.

Next, select a channel where messages should be posted to. For instance, you could use your #general channel, create a new one, or post the message to a private conversation. Then, click Add Incoming WebHooks integration.

On the next page, you can find the web hook’s URL. Furthermore, you can configure how messages posted using this web hook should appear in the conversation history. For instance, you could name your hook for the rabbit messages rabbitbot and could assign it the :rabbit2: emoji as an icon. 🐇

When you are done, click Save Settings and remember the web hook URL. Repeat the steps from above for the food web hook. Then, switch back to your Logic App.

Connecting Web Hooks and Logic App

Within the case for the FoodIntent, add a new action by clicking the Add an action button inside its block. Click the HTTP button in the first line and select the HTTP – HTTP action. This action can be used to send arbitrary HTTP requests, for instance to trigger the web hooks we’ve just created.


Web hooks are triggered by POST requests. Hence, select POST as the HTTP method from the drop-down list in the first line of the resulting action. Paste the URL of the web hook created for food messages. In the Headers section, make sure to specify application/json as the Content-Type for the request.

Then, specify the payload of the HTTP request: the Slack message you’d like to post. The Slack API documentation has more information on how to configure Slack messages for web hooks in JSON. For instance, the mention of the user christianliebel has to be surrounded by angle brackets. Finally, we add the :pizza: emoji, which makes our message a lot friendlier. 🍕
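
A minimal payload along these lines might look as follows (a sketch; the wording is up to you, and christianliebel stands in for whomever you want to mention):

    {
      "text": "<@christianliebel> Food is ready! :pizza:"
    }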

Repeat these steps for the RabbitIntent, but make sure to use the other web hook URL here. Also, don’t forget to make use of a rabbit emoji.


Configure an Answer for Alexa

Sure enough, Alexa’s HTTP request deserves a response. You might again refer to Alexa’s request and response JSON reference, which also includes samples for responses. Here’s a minimal version, following the documented response format:
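
    {
      "version": "1.0",
      "response": {
        "outputSpeech": {
          "type": "PlainText",
          "text": "Sure thing!"
        },
        "shouldEndSession": true
      }
    }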

To configure the response, navigate to the last action in the workflow, the Response.

Set 200 as the status code, so Alexa knows that everything was fine. Sure, you might want to introduce error handling later on. In the Headers section, make sure to set the Content-Type to application/json and paste the response payload from above.

As a result, Alexa will respond “Sure thing!” to all requests, including LaunchRequests. You might adjust the response text later on, for instance with variables: they can be set just like any other action and read by adding them to the JSON payload via Add dynamic content.

Now, click Save in the designer’s toolbar, go back to the first action in the workflow, and copy the Logic App’s URL that has just been generated.

Connecting Logic App and Alexa Skill

Next, we’d like to connect our Logic App to the Alexa skill. Hence, switch back to the Alexa Skill Builder and click Configuration.

Then, you’ll see the endpoint configuration form. As Service Endpoint Type, select HTTPS. By default, Amazon selects its own serverless compute service, AWS Lambda. Using HTTPS, we can enter an arbitrary URL to which Alexa requests will be POSTed. Paste the Logic App’s URL into the Default text field. If you like, you can provide different endpoints per geographical region; for our use case, we only target a single region and hence leave this option set to No. As our skill is personal and not intended for redistribution, we’ll also keep the Account Linking option set to No. And since our skill doesn’t need any special resources or capabilities, we don’t have to check any of the Permissions check boxes. Click Save and proceed with Next.

On the next page, you have to specify which certificate will be used for the default endpoint. As we’re using the Azure Logic App endpoint here, we can select the second option: Azure already secures the endpoint with a wildcard certificate, so there’s nothing more to do on our side. Click Save and Next.

Now we’re almost ready to try out our skill on an Amazon Echo or any other Alexa-enabled device. On the Test page, click the toggle switch to enable the skill for testing on your account. You will then see the skill in the Your Skills section of the Amazon Alexa app. If you don’t have an Alexa-enabled device, you can also use the Test Simulator Beta (the black button) to test your skill.

Et voilà!

That’s it! Now just say your invocations…

Alexa, tell my children to feed the bunnies.
Alexa, tell my children: food is ready.


…and Alexa will trigger your Logic App
…which will invoke the Slack web hook
…and the message will appear in your Slack channel.


Pretty easy, huh? That’s it: 15 minutes, without a single line of code!