Smart Home Artificial Intelligence

In 2012 we started a project to provide a natural language processing interface to our smart home, enabling query and control whether we are at home or away. Since then we have been improving this interface, adding an Artificial Intelligence (AI) capability. This allows us to query and control every aspect of our smart home with 'whole house' context, simply by having a conversation with it.

We have also built this interface to work with both text and voice. The voice capability is simply speech recognition in front of the text input service. If speech is used to interact with it, then the resulting text response is converted back to speech using text-to-speech synthesis. We have done this because speaking to things is not always convenient and the text chat is more private and works both locally and remotely.

Just because we designed an interface to enable natural language interaction, it doesn't mean we will always use it! We have designed our smart home to be intelligent and autonomous, requiring minimal user interaction. It just senses and works intelligently around the people within it to provide the best user experience possible and in many cases is zero touch.

We also see this as just one of many user interfaces. People use the interface that works best for them and their current situation, and this varies depending on the task to be performed. Sometimes a switch just works better and is more efficient; sometimes it is a complex task and an app works better. Sometimes voice control is the best answer. For those with physical or visual impairments, voice control is often the best way to interact with things in the smart home.

This project will continue for some time to come, as we continue to refine, improve and extend our smart home.

What Is Artificial Intelligence?

A common definition of artificial intelligence (Wikipedia) is:
"The theory and development of computer systems able to perform tasks that normally require human intelligence".

Why Take This Approach?

  • It provides a very simple user experience for anyone trusted to interact with our smart home.
  • It uses existing communication tools and methods that most people are already familiar with and it works on a wide range of devices.
  • It works with any communications protocol such as SMS, e-mail, XMPP, WebSockets, Twitter DMs, etc.
  • Although primarily text based, the underlying transport is multimedia capable and interactions involving images, audio and video are also within the scope of this project.
  • A natural language interface is very intuitive and requires no experience or training in order to use it. It requires no understanding of the technology and it can also assist people with requests and queries.
  • It is a very convenient and efficient method to interact with our smart home.
  • It can be a silent and unobtrusive way to interact with our smart home.
  • It is an interface that works for anyone trusted with access, from any location.
  • It is an extremely powerful interface, enabling every aspect of our smart home to be queried and controlled quickly and easily.
  • Our smart home retains context through a conversation, so if you had just discussed an object (e.g. Conservatory Lamp), the simple command 'switch it off' would be understood and acted upon.
  • It enables both simple and very complex requests to be made upon our smart home, e.g. 'Turn all the outside lights off' or 'What just happened?'.
  • It does away with the need for custom apps with complex user interfaces and navigation models.
  • Our smart home models user availability (online presence) and can also initiate conversations to deliver alerts and notifications.
  • Our smart home AI engine can provide help and guidance if required.

Design Approach

We are building this AI engine in such a way that it could work in any home, we plan to use it in our next home after all. There is no 'hard coding' of features or capabilities. All the code we have written is generic in nature and applicable to any other smart home. It is all driven by the simple configuration files used to define our smart home and only these need to be changed in order for it to work in any other smart home. The bit that is fairly unique to our smart home is the technology abstraction, which enables it to physically connect to the specific hardware in our home.

Whilst this project is designed to enable artificial intelligence in our smart home, we have written software that is limited in scope and context. It is an example of Artificial Narrow Intelligence (ANI). It works really well within the scope of our smart home and its local environment but has no knowledge of things outside this domain. This constraint makes the implementation task much simpler and as we have shown here, fully achievable.

This is just one interface to our smart home and it is most definitely not the only one. We have many other ways of interacting with our smart home and the components within it, such as dedicated switches, buttons, gesture control, mobile apps, web interfaces, etc.

To enable the AI capability we have provided our smart home with an identity. This enables the required trust relationships and enforces authentication (and identification) of users. Our smart home models people and associates roles and privileges with known individuals, which are used to modify behaviour, responses and permissions.
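
To illustrate the idea, here is a minimal Java sketch of role-based gating. The role names and the alarm policy shown are illustrative only (elsewhere we note that guests, for instance, can't turn the alarm on or off); this is not our actual code:

    // Sketch: known users carry roles that gate what they are allowed to do.
    public class AccessControl {
        enum Role { FAMILY, GUEST }

        // Guests can query the alarm but not arm or disarm it (assumed policy).
        static boolean mayControlAlarm(Role role) {
            return role == Role.FAMILY;
        }

        public static void main(String[] args) {
            System.out.println(mayControlAlarm(Role.GUEST));  // false
            System.out.println(mayControlAlarm(Role.FAMILY)); // true
        }
    }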

Assumptions

We have assumed that the AI needs to be a core part of our smart home. In order to understand what is being asked of it, our AI needs visibility of all aspects of our home, so that it can use statistical techniques to identify the potential objects under discussion. Our modelling enables this and the model is fully exposed to and used by our AI. This gives it full context and a real-time view of every sensor, device and all their associated attributes and values. So when we ask our smart home 'When was the front door closed?', it has the context and information to hand to be able to answer the question.

Implementation

We have written a lot of code in Java to implement the AI functionality. We have also written classes that enable the communications and responses for each transport protocol used (e.g. XMPP, WebSockets, etc.).

Our approach is based on a three-step process:

  1. We use 3rd party services to convert speech to text if required. Typically this is the built-in speech recognition enabled by iOS and Android. We have taken this approach for now because voice interaction is not our main focus (text interaction is) and companies like Google and Apple can throw resources at the challenge of doing accurate speech recognition that we can only dream about. Local speech recognition will be the subject of a future project. For now we are focusing on dialogue with our smart home conducted using 'text chat'.
  2. We use semantic and contextual language analysis to extract as much contextual information as possible. This means that in effect we are teaching the AI engine to understand the English language. This is obviously an on-going (and potentially endless) process but it is already good enough for most of our needs (and the limited scope of this project). This part of the project could potentially have other applications but it means that our AI engine will only ever understand English. Someone could pick up our code and make it work with other languages but we don't have the time to do this. The extracted context includes a probability that the request is a question or a command that requires action.
  3. We then use the extracted context to work out what action needs to be taken. This could be a query for an object or one of its attributes or it could be a command to change the state of an object or one of its attributes. As we have constrained the project scope to our smart home only, the set of outcomes can be represented with just a few functions/methods. It is this that makes the connection into the physical world very simple to achieve and our technology abstraction model simplifies this process even further.
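
As a rough illustration of this three-step flow, the Java sketch below strings the steps together. All names (RequestPipeline, Context, the toy heuristics) are hypothetical and greatly simplified; they are not our actual code:

    import java.util.List;

    // A minimal sketch of the three-step request pipeline.
    public class RequestPipeline {

        // In reality the known objects come from the JSON configuration files.
        static final List<String> KNOWN_OBJECTS = List.of("Study Lamp", "Front Door");

        static class Context {
            String object;              // best-matching object, if any
            double questionProbability; // likelihood the request is a question
        }

        // Step 2: semantic/contextual analysis of the (already transcribed) text.
        static Context extractContext(String text) {
            Context c = new Context();
            c.questionProbability = text.trim().endsWith("?") ? 0.9 : 0.2; // toy heuristic
            for (String o : KNOWN_OBJECTS) {
                if (text.toLowerCase().contains(o.toLowerCase())) c.object = o;
            }
            return c;
        }

        // Step 3: act on the extracted context, as either a query or a command.
        static String resolve(Context c) {
            if (c.object == null) return "Sorry, I don't know what you are referring to.";
            return c.questionProbability > 0.5
                    ? "Query the state of the " + c.object
                    : "Change the state of the " + c.object;
        }

        public static void main(String[] args) {
            // Step 1 (speech-to-text) happens upstream; we receive plain text.
            System.out.println(resolve(extractContext("Is the Study Lamp on?")));
        }
    }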

This project is not using keyword spotting and hard-coded mapping of phrases to objects and actions. It is designed from the outset to work in any smart home. This is easily configured using some JSON configuration files that describe the zones, objects, their attributes, etc. We have, however, written code to model object types and this is also an on-going part of the project.

As we add new hardware and think of more things we want it to do, we test the AI by sending it the new requests to see how well it understands them. We then iteratively improve the contextual analysis code and this is essentially how our AI engine learns.

Modelling

All of what is described here is achieved through modelling every aspect and element of our house as Java classes, with numerous attributes. This includes relevant context. This can only be realised because:

  1. Our Home Control System (HCS) has 'whole house' context and has visibility of all sensors, devices and events that occur in our smart home.
  2. The model and language used by our Home Control System (HCS) is both machine and human readable. This means that when a person references a physical device (e.g. Bathroom Towel Rail) our Home Control System (HCS) knows what we are talking about because that is the identifier it is using too.
  3. Our Home Control System (HCS) uses technology abstraction and is therefore technology agnostic. Having said that, the underlying technology must be capable of being modelled by one of our 'object types'.

We originally used XML to model our smart home and every aspect and element within it. This enabled our Home Control System (HCS) to then work in any home using a similar set of simple models. To better support some of the features required by this project, we then refined these models and made them even simpler. We have now moved away from XML to JSON. JSON is much easier to read and parse using the Java programming language.
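
By way of illustration, an object entry in such a JSON configuration might look something like the sketch below. The field names here are invented for the example rather than taken from our actual schema:

    {
      "objects": [
        {
          "name": "Conservatory Lamp",
          "type": "dimmableLight",
          "zone": "Conservatory"
        },
        {
          "name": "Front Door",
          "type": "door",
          "zone": "Hall",
          "lockable": true
        }
      ]
    }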

Commands & Questions

Our AI uses an algorithm to determine whether a command (requiring action) or a question is being asked. This is based on the context extracted from the request and also on the historical interaction and context.

Objects

Everything that can be controlled or queried in our home is modelled as an object. This model includes things like the object's name, zone, and other attributes such as whether it uses a controller or schedule.

Lockable attribute
Each object also has a set of attributes that can also be queried or controlled. These are defined based on object type but can be optional for each object. An example of this would be a door, which will have an attribute that indicates whether it has a lock that can be controlled and queried. The 'Lock' attribute would have a value of 'Locked' or 'Unlocked'. Other attributes would be things like last opened, last closed, last locked and last unlocked (all timestamps).

Valid values
Our model also knows the valid values for objects and their attributes. This means it can assist users in making valid requests and commands, with persistent context retained through the conversation.

Groups Of Objects

Our AI engine doesn't just work with one object at a time. It also models the concept of groups or lists and applies the requested action to these groups. A simple example might be: "Which downstairs lights are on?". This will result in a group being created, and a further command such as "Turn them off" would result in the group of matching lights all being turned off.
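
In outline, such a group is just the result of a filter over the object model, retained as conversational context so that a follow-up command can act on it. A Java sketch (with an invented Light record, not our actual classes):

    import java.util.List;

    // Sketch: "Which downstairs lights are on?" builds a group, and a
    // follow-up command like "Turn them off" is applied to every member.
    public class GroupExample {
        record Light(String name, String zone, boolean on) {}

        public static void main(String[] args) {
            List<Light> all = List.of(
                new Light("Lounge Lamp", "Downstairs", true),
                new Light("Study Lamp", "Upstairs", true),
                new Light("Kitchen Worktop Lights", "Downstairs", true));

            // The question creates a group, which is kept as conversation context.
            List<Light> group = all.stream()
                .filter(l -> l.zone().equals("Downstairs") && l.on())
                .toList();

            // "Turn them off" then acts on the whole group.
            group.forEach(l -> System.out.println(l.name() + " is now Off."));
        }
    }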

Object Types

Everything connected to our smart home has an object type and this includes the 90+ IP-networked devices that connect to our home network, which is also monitored. The main objects though are the sensors that can be queried and the devices that can be both queried and controlled. There are well over 100 of these in our home at present but the number grows every month.

We currently have over 30 types of object in our smart home but this is also a growing list. For each object type we also model its characteristics, allowable values, etc. This means that once we have done this, we can add any number of objects of this type to our home, simply by adding them to the configuration file.

Object types are not a simple one-to-one mapping with objects. We have 'dimmable lights' that are a subset of 'lights', and 'colour changing lights' that are in turn a subset of 'dimmable lights'.
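
This hierarchy maps naturally onto class inheritance, where each subtype inherits the capabilities of its parent and adds its own. A minimal sketch (class names assumed for the example):

    // Sketch: the light object type hierarchy as Java classes.
    public class LightTypes {
        static class Light {
            boolean on;
            void setOn(boolean on) { this.on = on; }
        }

        static class DimmableLight extends Light {
            int level; // 0..100%, remembered so switching back on restores it
            void setLevel(int level) { this.level = level; this.on = level > 0; }
        }

        static class ColourChangingLight extends DimmableLight {
            String colour; // e.g. "Warm White"
            void setColour(String colour) { this.colour = colour; }
        }

        public static void main(String[] args) {
            ColourChangingLight lamp = new ColourChangingLight();
            lamp.setLevel(50);
            lamp.setColour("Light Green");
            System.out.println("on=" + lamp.on + " level=" + lamp.level + "% colour=" + lamp.colour);
        }
    }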

These are just a few examples of the many object types supported:

Battery

The 'Battery' object models the batteries in wireless devices and things like our 12V UPS. Its level is a percentage between 0% and 100%. Each instance has a minimum level attribute and once this is reached the object generates an alarm. This is a good example of an object type that knows it can only be queried and not controlled.

Camera

The various IP-networked cameras around our home are modelled, as are their attributes, such as the ability to capture still images and video.

Dimmer

This is an extension of the 'Light' object. As well as having a state of on or off it remembers the light level (0 to 100%) so that when it is switched back on it uses the previously set brightness level. This is an object type that can be both queried and controlled.

Door

The 'Door' object type is for modelling doors, gates, etc. It supports attributes to generate an alarm if the object has been in a given state (e.g. left open) for the specified time period. Some doors are also lockable remotely. Our model and the AI engine have knowledge of this, so if you try to lock a door or query a door that doesn't have a lock it will let you know that this can't be done.
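
A sketch of how the optional lock might be modelled, so that locking a door without a lock produces a helpful response rather than an action (the names and shape here are illustrative, not our actual code):

    import java.time.Instant;

    // Sketch: a door object with an optional, per-instance lock attribute.
    public class Door {
        private final String name;
        private final boolean lockable; // not every door has a controllable lock
        private boolean locked;
        private Instant lastLocked;     // timestamp attribute, as per the model

        public Door(String name, boolean lockable) {
            this.name = name;
            this.lockable = lockable;
        }

        public String lock() {
            if (!lockable) return "The " + name + " does not have a lock.";
            locked = true;
            lastLocked = Instant.now();
            return "The " + name + " is now Locked.";
        }

        public static void main(String[] args) {
            System.out.println(new Door("Front Door", true).lock());
            System.out.println(new Door("Study Door", false).lock());
        }
    }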

We have another object type planned for the automated (opening and closing) doors that will be part of our next smart home.

Fan

The 'Fan' object exists to model binary fans (which are On or Off). We have another object to model variable speed and multi-speed fans.

Service

Twitter service control
Our smart home models services and provides query and control of them. Our smart home's Twitter account is one of these services.

Smoke

The 'Smoke' object is used to model the dumb smoke sensors and smoke alarms that we have interfaced to our smart home.

Temperatures & Thermostats

The temperature object models temperature sensors in our home and these provide positive and negative floating point numeric values. These also have individual maximum and minimum threshold values that generate alarms when crossed. In addition our model tracks rate of change on a per sensor basis (the temperature changes occurring in our garage and bathrooms are very different to the other rooms) in order to raise an alarm when the temperature rises at an excessive rate.
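
A sketch of the per-sensor checks described above, with illustrative threshold and rate values (the real per-sensor figures are configured individually):

    // Sketch: per-sensor temperature thresholds and a rate-of-change alarm.
    public class TemperatureSensor {
        private final double min, max;      // per-sensor alarm thresholds (°C)
        private final double maxRisePerMin; // per-sensor rate-of-change limit (°C/min)
        private double lastValue;
        private long lastTimeMillis;

        public TemperatureSensor(double min, double max, double maxRisePerMin) {
            this.min = min; this.max = max; this.maxRisePerMin = maxRisePerMin;
        }

        // Returns an alarm message, or null if the new reading is unremarkable.
        public String update(double value, long timeMillis) {
            String alarm = null;
            if (value < min || value > max) {
                alarm = "Temperature threshold crossed: " + value + "°C";
            } else if (lastTimeMillis > 0) {
                double minutes = (timeMillis - lastTimeMillis) / 60000.0;
                if (minutes > 0 && (value - lastValue) / minutes > maxRisePerMin) {
                    alarm = "Temperature rising at an excessive rate";
                }
            }
            lastValue = value;
            lastTimeMillis = timeMillis;
            return alarm;
        }

        public static void main(String[] args) {
            TemperatureSensor garage = new TemperatureSensor(0.0, 40.0, 2.0);
            System.out.println(garage.update(18.0, 1_000));  // null (unremarkable)
            System.out.println(garage.update(28.0, 61_000)); // rate alarm: +10°C in a minute
        }
    }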

'Temperature' is an interesting object type because the word also has meaning to other object types. It therefore has a 'Thermostat' attribute which is used to set temperature in zones or rooms. This is essentially a manual over-ride to the intelligent controllers already in use though.

We have taken this approach because our next smart home will have zone-level climate control and there is no traditional or even smart thermostat required. It is all managed by our Home Control System (HCS). Our model still works with a single thermostat in a home if required though.

Locations & Distance

As some of the above examples show, our AI engine models locations and has awareness of them. It models frequented locations and the short-cuts or aliases used for them, such as 'work' and 'home', and it learns and builds upon the list of known locations over time. It also understands distance and the distances between locations, and models more abstract location concepts such as 'upstairs' and 'downstairs'. These have valuable meaning and context, particularly when used in home automation and control.

Zones

Zones are an essential part of any smart home. If you are not modelling zones in detail, then we would question whether you really have a smart home at all.

Our zone model is extremely powerful and an integral part of the whole house context available to the AI engine. It models the relationships between zones, including nesting of zones (e.g. the 'Ensuite Bathroom' zone sits within the 'Main Bedroom' zone, which sits within the 'Upstairs' zone, which sits within the 'House' zone). From this it can know that the house is occupied if any of the zones within it are occupied.
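
The nesting lends itself to a simple recursive check, sketched below with the zone names from the example (the class shape is assumed for illustration, not our actual model):

    import java.util.ArrayList;
    import java.util.List;

    // Sketch: nested zones, where a zone counts as occupied if it, or any
    // zone nested within it, is occupied.
    public class Zone {
        final String name;
        final List<Zone> children = new ArrayList<>();
        boolean directlyOccupied;

        Zone(String name) { this.name = name; }

        boolean isOccupied() {
            return directlyOccupied || children.stream().anyMatch(Zone::isOccupied);
        }

        public static void main(String[] args) {
            Zone house = new Zone("House");
            Zone upstairs = new Zone("Upstairs");
            Zone bedroom = new Zone("Main Bedroom");
            Zone ensuite = new Zone("Ensuite Bathroom");
            house.children.add(upstairs);
            upstairs.children.add(bedroom);
            bedroom.children.add(ensuite);

            ensuite.directlyOccupied = true;
            System.out.println("House occupied? " + house.isOccupied()); // true
        }
    }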

The model also knows about objects that form the entry and exit points to any zone, such as doors or beam-break sensors. This means the zone context includes whether the zone is 'open' or 'closed' and thus contains zone occupancy and presence information. It also enables people tracking and counting. As well as this spatial information, it knows about objects within each zone and time context data. This means that interacting with objects in a zone can update occupancy and presence for that zone.

All of this context is available to our AI engine and can be used to control and query our smart home.

Interfaces

We have not limited our AI to one interface or one type of interface. We need it to be flexible and support interaction by text and voice, both locally and remotely. A big part of this project is in finding out which interfaces and protocols work best, in terms of the user experience, reliability and performance.

Amazon Echo

We are currently working on a bespoke skill to enable full control of our smart home via the Amazon Echo. This is far from ideal as the Amazon Echo doesn't identify the user, support authentication or work remotely (Amazon Echo remote aside).

Apple iMessage

We are still waiting for Apple to provide an API to enable this. It was promised many years ago and is unlikely to happen now :-( This would be our ideal way to interact with our smart home AI as it is free, works locally and remotely, is authenticated and supports full multimedia interaction.

SMS

Our smart home has 3G/4G fail-over and the ability to send and receive text messages. This capability is used by the AI engine and for alerting. This is limited to text only for now (i.e. we are not planning to use MMS).

Twitter Direct Messages

As our smart home is already on Twitter, it makes sense to use the Twitter direct messages capability as an interface to enable dialogue between our home and trusted users. This is quite simple to do and is something we are currently working on. A demonstration video or two will follow very shortly.

Web

We use an HTTP interface to test and develop our AI capability. The web speech support in Google Chrome enables full voice interaction.

XMPP

We started this project with XMPP support and this is a simple and easy way to have a conversation with our smart home. It does support multimedia communications and is authenticated. It also works locally and remotely. We are only using it for text-based communications.

The Market Place

Google & Nest

Google have started to hint at delivering this sort of functionality recently (Dec 2014), working with Nest, but with so little context to use it is never going to provide the kind of capabilities we have in mind.

JOSH

JOSH is perhaps the closest to meeting this project's objectives. We will be tracking its progress closely.

Mycroft

Mycroft is a Kickstarter project based around a Raspberry Pi, to provide a physical device and interface.

Summary

This project is enabling us to query and control all of the many networked devices, sensors and appliances in our smart home (nearly 200 in total now) with a very simple and intuitive user interface that anyone could use. It requires no learning and no prior knowledge of our home.

It works from any location and we envisage it primarily being used whilst away from home. It works with any transport protocol and we have focussed our initial efforts on XMPP, SMS (text messaging) and WebSockets. Ideally we would like to use Apple's Siri as an interface but we can find no easy way to enable this at present.

We think that Artificial Intelligence (AI) is the future of the smart home. Much like our smart home, our artificial intelligence engine will evolve and progress over time as we think of new things to add and new ways to extend its understanding and capabilities. We started out with a very simple text interface that understood a few basic commands but we now have a very smart, context sensitive home that can interact in many ways and respond intelligently to both simple and complex requests.

This has been a fun project to do and I like coding. This alone is a good enough reason to do it :-)

The Future ...

Don't worry. We have no plans to get to this level of intelligence :-)

HAL 9000: "I'm sorry Dave, I'm afraid I can't do that" - An excerpt from the 1968 film "2001: A Space Odyssey" directed by Stanley Kubrick.

Amazon Echo

We now have an Amazon Echo device and are in the process of connecting it to our AI engine.

Demonstrations

A short video clip of our smart home AI in action:

We are currently recording a series of videos that show the various features exposed. These are being presented by our daughter. The following examples show what our smart home can do and what will be demonstrated in these videos:

People & Personalisation

Our smart home recognises known people and provides a personalised experience for them. It will also work for guests trusted with access and provides a limited set of capabilities for them (e.g. guests can't turn the alarm on or off). Examples:

Hi
Hello Ethan.

Is Emma at home?
No, Emma is not at home.

Time, Time Periods & Durations

Our smart home knows what time it is and understands the concepts of morning, afternoon, evening, night and day. It also knows if we are 'up'. It also understands concepts like yesterday, today, tomorrow and future events. It can also understand and handle requests involving time durations. Examples:

What time was sunrise yesterday?
Sunrise yesterday was at 06:17.

What time is sunset today?
Sunset today is at 17:21.

What is the weather forecast for tomorrow?
The weather forecast for tomorrow is ...

Context

When our smart home is asked something it extracts as much context as possible, so that it has as much information as possible to work with. This is combined with our smart home's 'whole house' context to provide the knowledge required by the artificial intelligence.

Context persistence
Some context has persistence so that a conversation can be held with context retained through several requests. A clever algorithm is used to work out which context is useful or relevant and needs to be retained through the conversation.

The context also includes any identified actions requested.

Mode & Status

The current mode and house status of our smart home changes the way it behaves. Trusted people can set the mode and our smart home will then determine its working status, based on a wide range of factors (occupancy, day of the week, etc.). It then alters its behaviour accordingly. Typically our smart home runs in automatic (Auto) mode, so that there is no need for us to keep telling it that we are in, out or away on holiday. Examples:

Status out
Please set the Mode to change the Status Rob.

mode?
The Mode is set to Auto.

Mode = in
The Mode is now set to In.

set the mode to in
The Mode is already set to In.

Auto
The Mode is now set to Auto.

Status
The Status is set to In.

Security, Access Control & Cameras

Alarm control
This video will show the key security and access control features, such as querying and controlling doors and smart locks. It will cover the alarm system and its features. It will also show the multimedia capabilities of our smart home AI in controlling and viewing still images and video clips from security cameras.

Lighting

The majority of the lighting in our home is intelligently automated, providing a 'zero touch' experience. Our smart home just knows when to turn lights on and off. It also supports full manual control via numerous interfaces and our artificial intelligence supports full control of binary, dimmable and colour changing lights.

Dimmable lights example
This video shows how we can control basic lighting, dimmable lighting and colour changing lights.

Binary and dimmable lights:

Is the study lamp on?
Yes, the Study Lamp is On.

Set it to 75%
The Study Lamp is now at 75%.

Colour changing (RGB and RGBW) lights:

Set the study lamp to 50%
The Study Lamp is at 50%.

Make it light green
The Study Lamp is now Light Green.

Now red
The Study Lamp is now Red.

Set all the upstairs lights to warm white
The upstairs lights are now Warm White.

turn off all the downstairs lights
The Downstairs lights are now Off.

Heating, Cooling & Hot Water

Our smart home controls heating, cooling and hot water intelligently to make sure it is a 'zero touch' experience and just works. Our smart home is capable of zone-level control and uses schedules, detected occupancy and location tracking to improve comfort and efficiency. It will also learn from user input. Our smart home implements numerous 'smart thermostats' in software and these work with 'whole house' context.

This video shows the extensive manual control and user input that works with our smart home.

I'm too cold
I will adjust the temperature in the Study.

Turn on the hot water for 60 minutes
The hot water is now On for 60 minutes.

Set the lounge temperature to 23 degrees for the next 3 hours
The Lounge temperature is set to 23°C for the next 3 hours.

Objects

Every sensor and smart device in our home is modelled as an object. There are currently about 200 of them in our home. Every single one can be queried or controlled via this AI interface and this includes all their attributes too. Examples:

Is the study door closed?
No, the Study Door is open.

What is the lounge temperature?
The Lounge Temperature is 21.7°C.

When was the front door last closed?
5:43pm on Thursday the 7th of January 2016.

Is the PS4 on?
Yes, the Sony Playstation 4 is On.

This video will also show how our smart home understands types of objects. Every object in our smart home also has an associated object type. We currently have over 30 types of object in our smart home but this is a growing list. For each object type we also model characteristics, attributes and their allowed values. This means that once we have done this, we can add any number of objects of this type to our home and they all inherit its capabilities. Object types are not a simple one-to-one mapping with objects. We have dimmable lights that are a subset of lights, and colour changing lights that are in turn a subset of dimmable lights.

Our smart home AI doesn't just work with one object at a time. It also supports the concepts of groups and lists and can apply a requested action to these.

which downstairs lights are on?
The Lounge Lamp, Conservatory Lamp and Kitchen Worktop Lights are On.

Switch them off
They are now Off.

Presence, Occupancy & Location

This video will show how our smart home models family members (also guests and other people), their presence and location. It will show how it uses this information to alter its behaviour and predict future events and needs. This is all achievable because we track presence, occupancy & proximity to zone level.

Examples:

  • Is anyone at home?
  • Is there anyone in the garden?
  • Is Rob at home?
  • Where is Emma?
  • When was someone last in the garage?
  • When will Kim be home?

This video will also show how our smart home tracks people's locations and understands distances.

Notifications

Notification example
Our smart home can proactively send us notifications. These can easily be switched on or off. We can also request notifications for any object and valid value combination. These are also associated with users, so all family members can set their own notifications. Because people are also objects modelled in our smart home, it is possible to set a notification such as "Let me know when Rob gets home". Because we also model locations this could equally be 'work' (or some other location) and 'work' is also specific for each person.
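
One way to picture such a notification is as a per-user subscription to an (object, value) combination; the sketch below is a simplification with invented names, not our actual code:

    import java.util.ArrayList;
    import java.util.List;

    // Sketch: per-user notifications on object/value combinations,
    // e.g. "Let me know when Rob gets home".
    public class Notifications {
        record Subscription(String user, String object, String value) {}

        private final List<Subscription> subs = new ArrayList<>();

        void subscribe(String user, String object, String value) {
            subs.add(new Subscription(user, object, value));
        }

        // Called whenever any object in the model changes value.
        void onChange(String object, String value) {
            for (Subscription s : subs) {
                if (s.object().equals(object) && s.value().equals(value)) {
                    System.out.println("Notify " + s.user() + ": " + object + " is now " + value);
                }
            }
        }

        public static void main(String[] args) {
            Notifications n = new Notifications();
            n.subscribe("Emma", "Rob", "Home"); // people are modelled as objects too
            n.onChange("Rob", "Home");
        }
    }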

Zones

Our smart home models each room or space as a zone and it currently has about 25 zones. Our zone model supports a hierarchy of nested zones, so (for example) we have a Main Ensuite zone within the Main Bedroom zone, which is within the Upstairs zone, which is within the House zone. Each zone has numerous attributes that can be queried and controlled. These include all exits (doors and beam-break sensors), occupancy, presence and neighbouring zones.

Is the study occupied?
Yes, Rob is in the Study.

How many people are in the house?
There are 3 people in the house.

Scenes

This video will show how we use scenes in our smart home for both complex and repetitive tasks. Examples of scenes:

Good morning

good night

Safety & Peace Of Mind

This video will show how our smart home monitors many things to improve safety and security. It will show how notifications and risk assessments are made, along with methods of discovering what is currently happening and what has happened in the past. Examples:

What just happened?

What has happened recently?

Is there anything to report from last night?

Show me who visited yesterday.

Our smart home can also provide pre-emptive warnings and intelligently deliver them in a timely fashion:

It will be icy outside this morning Rob.

Lightning has been detected.

There have been 7 lightning strikes in the last 15 minutes.

Our smart home and its AI know about the mains power supply. Examples:

  • Is the mains power on?
  • Has the mains power failed? When did it last fail?
  • How long was the mains power off for?
  • When was it restored?

Networks & Networked Devices

As our smart home monitors our home network and also looks for Bluetooth devices, it improves security and tracks the physical proximity of devices. Our smart home also monitors our Internet connections and their performance. This video will show some of the advanced features enabled by these capabilities. Examples:

Is the Internet up?
Yes, the Internet connection is working.

Has the Internet connection failed today?
No, the Internet connection last failed 43 days ago.

What is the Internet speed?
The uplink speed is 19.4Mbps and the downlink speed is 79.8Mbps.

Weather

This video will show how our smart home uses a local weather station, local air quality monitoring, Internet data feeds and public IoT sensors to monitor local weather conditions and adapt its behaviour accordingly. It will also show how our smart home proactively improves our safety through monitoring and alerting family members of potential dangers.

We already have intelligent voice announcements for rain sensing but it is possible to set notifications on any weather event. Our smart home has a large number of local weather sensors and also includes a weather service that collates relevant (local) data feeds and IoT sensor data. Examples:

Let me know when it starts raining
I will let you know Rob.

Telecare

This video will show how the advanced Telecare functionality that our smart home inherently supports could be deployed into the homes of the elderly and vulnerable, to provide remote monitoring and confidence that they are well. We have also written an article on this subject. Examples:

Is anyone up yet?
Yes, someone is up.

Has anyone been in the kitchen this afternoon?
No, no one has been in the kitchen today.

Have there been any visitors?
Yes, there have been 3 visitors today.

How many people are in the house?
There are currently 2 people in the house.

What Is Different About Our Approach?

The primary difference between what we are doing and the 3rd party services that we currently see in the market place is that our Home Control System (HCS) has real-time visibility and full context of all of the 200+ connected sensors and devices in our smart home, including all associated attributes and values. It models 30+ object types and the 90+ IP-networked devices in our home. It also has access to the model we use for people, rooms/zones and the relationships between them. This is information we would never share outside of our home or with 3rd party service providers.

This enables us to ask questions of our AI that 3rd party services simply couldn't answer (unless they too provide a 'whole house' solution), e.g. "Which lights are on upstairs?". In order to answer questions like this, it needs to know about all of the connected lights (regardless of brand), which zones they are located in and whether those zones are within the parent domain that models 'upstairs'. It also needs to know their current state and possible states (e.g. is it a binary light, dimmable or colour changing). To achieve this we use a model that supports technology abstraction and integrates and supports all of the smart home technologies and products used in our home. This is achieved through intelligent selection of products or development of hardware, combined with hard work to enable this level of integration.

Further Reading

Twitter

Our intelligent smart home posts regular tweets to its own Twitter account. These provide some good insight into what our smart home can do and expose some of the many sensors and devices (and their attributes) that can be queried and controlled.

Phase 2

In a later phase of this project we aim to add capability to support the following functions and features:

Communications

Enable our smart home to initiate, answer and manage IP-based communications. We also see our smart home acting as a personal assistant to screen calls and filter inbound communications to the right family members.

Energy & Efficiency

Enable our smart home to monitor devices and track energy usage, providing pro-active guidance and initiating conversations on this subject. Examples:

You have used x kWh of electricity today.

The Study Light has been on for 11 hours. Would you like me to turn it off?

Entertainment & Fun

The focus of attention at the moment is in understanding what devices and services can be controlled in this manner and whether this is actually the best way to do it. Most of the music played in our home is played directly from Smartphones using AirPlay and Bluetooth.

The main emphasis is on the lounge and providing 'universal remote' type functionality, e.g. "Watch YouView" will turn on the TV, YouView STB and home cinema amp and select the right inputs on these devices.

Whilst many things are technically possible, the question has to be asked as to whether these are good things to do, given that devices like our Logitech Harmony One universal remote already provide a very simple and efficient user experience.

Our smart home is already providing personalised content recommendations to family members.
