How do Syntax, Semantics and Pragmatics apply to everything?

Before we begin, let's align our understanding of Syntactics, Semantics and Pragmatics with a simple refresher; broadly speaking, language can be analysed at three levels:
1) Syntax: Programming languages have constructs with which you write software to get a machine to do something exactly how you want it. When we were growing up, our parents told us how to 'sit', 'sleep' or 'stand' – and so we did. And we learned.
2) Semantics: Learning how to write software comes with the realization that there is more than one way to tell the machine to do something exactly how you want it. The programming constructs (and syntax) are a given, but the way we craft commands differs from person to person. As we grew up, we learned that there is more than one way to sit, stand and sleep, and more than one place to do it. And we evolved how we applied what we had learned.
3) Pragmatics: As we started learning more than one programming language, we started to think about context and how to get what we want in a good way, rather than focusing on syntax itself. As we grew up, we started to ask why we sit or sleep, and realized that rest and relaxation are what we actually want, that there are many ways to get rest, and that the right one depends on where we are and who we are with. And we started thinking less about what we say or do and more about the outcomes we want to drive in that context. A small code sketch after this list illustrates the same distinction in programming terms.
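To ground the analogy in code, here is a minimal Python sketch (the function names and the "audience" parameter are mine, purely illustrative): three syntactically different ways to express the same semantics, and a pragmatic choice between them driven by context rather than by the syntax itself.

```python
# Three syntactically different ways to express the same semantics:
# "add up the even numbers in a list".

def sum_evens_loop(numbers):
    # Explicit loop: verbose syntax, same meaning.
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n
    return total

def sum_evens_comprehension(numbers):
    # Generator expression: different syntax, identical semantics.
    return sum(n for n in numbers if n % 2 == 0)

def sum_evens_filter(numbers):
    # filter/lambda style: yet another syntax for the same result.
    return sum(filter(lambda n: n % 2 == 0, numbers))

# Pragmatics: which form we choose depends on context --
# e.g. readability for a team of beginners versus brevity.
def sum_evens(numbers, audience="beginners"):
    if audience == "beginners":
        return sum_evens_loop(numbers)       # easiest to read aloud
    return sum_evens_comprehension(numbers)  # idiomatic, concise

print(sum_evens([1, 2, 3, 4], audience="beginners"))  # prints 6
```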
[Image: How important is context to understand meaning?]

Do the image above and the question it poses make more sense alongside the image below?
[Image: How important is context to understand meaning? – part 2]

Hopefully, by now, you've begun to see from the first section that this kind of analysis is not limited to the study of languages; it applies to learning and, basically, anything we do in life. Specifically, I wanted to apply this thinking to Conversational UI design. The challenge, as I see it, exists in 3 areas (broadly); rough sketches of these follow the list:

1) Natural Language Processing: How do we build a UI, pre-parser and parser that enable people to speak freely with the UI?
2) Information Retrieval: Once we know what the ask is, how do we extract the right answer from structured and unstructured data sources?
3) Machine Learning: How do we enable machines to learn (supervised and unsupervised) from each interaction, so that they can handle context-based conversations? Note that a neural net only learns in a supervised way when the correct output is already known; ideally, while learning, the machine feeds each new interaction back in as training input.
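To make the first area concrete, here is a rough, assumption-laden sketch of what a pre-parser and parser might look like: the pre-parser normalises the raw utterance, and the parser maps it to an intent by keyword overlap. The intent names and keyword lists are hypothetical; a real system would learn these from data rather than rely on hand-written rules.

```python
import re

# Pre-parser: normalise the raw utterance (lowercase, strip punctuation).
def preparse(utterance: str) -> list[str]:
    cleaned = re.sub(r"[^\w\s]", "", utterance.lower())
    return cleaned.split()

# Parser: map tokens to an intent. The intents and keywords below are
# purely illustrative -- a production system would learn them.
INTENT_KEYWORDS = {
    "check_balance": {"balance", "account", "much"},
    "book_flight": {"flight", "fly", "ticket"},
}

def parse(tokens: list[str]) -> str:
    scores = {
        intent: len(set(tokens) & keywords)
        for intent, keywords in INTENT_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(parse(preparse("How much is left in my account?")))  # check_balance
```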

 
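The second and third areas can be sketched in the same spirit: once an intent is known, answer retrieval might be a lookup over a structured source, and every interaction (with its eventually confirmed label) can be appended to a training set so the machine keeps learning in a supervised way. Again, the data structures and file name here are assumptions for illustration only.

```python
import json

# Information retrieval: a toy structured source keyed by intent.
# In practice this would be a search over structured and unstructured
# data sources, not a hard-coded dictionary.
ANSWERS = {
    "check_balance": "Your balance is shown on the accounts page.",
    "book_flight": "I can start a flight booking for you.",
}

def retrieve(intent: str) -> str:
    return ANSWERS.get(intent, "Sorry, I did not understand that.")

# Machine learning: supervision is only possible when the correct
# output is known, so each utterance is logged together with the
# intent the user eventually confirmed, as future training data.
def log_interaction(utterance: str, confirmed_intent: str,
                    path: str = "training_data.jsonl") -> None:
    with open(path, "a") as f:
        f.write(json.dumps({"text": utterance,
                            "label": confirmed_intent}) + "\n")

log_interaction("How much is left in my account?", "check_balance")
print(retrieve("check_balance"))
```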

If we take a step back and think about how we are constructing a conversational UI to enable meaningful conversations, we will see that the principles remain consistent with how we construct a visual UI to enable meaningful visual experiences. As humanity realizes that there are other (better?) ways to reach the same outcome that swiping a screen gives me, without involving a screen at all, will our technology mature enough to enable such interactions?

[Image: Enable meaningful conversations using Conversational UI]

I’ve begun a conversation here on how organisations and consultancies should prepare, from a skills perspective, for delivering these new experiences. I think this is an equally important conversation to have now, so that some organisations don’t find themselves behind the curve like they did with Digital.
