{"id":5028,"date":"2021-07-29T11:47:07","date_gmt":"2021-07-29T06:17:07","guid":{"rendered":"https:\/\/blog.guvi.in\/?p=5028"},"modified":"2025-10-22T13:04:42","modified_gmt":"2025-10-22T07:34:42","slug":"build-your-personal-voice-assistant","status":"publish","type":"post","link":"https:\/\/www.guvi.in\/blog\/build-your-personal-voice-assistant\/","title":{"rendered":"Build your own personal voice assistant like Siri, Alexa using Python"},"content":{"rendered":"\n<p class=\"has-drop-cap\"><span style=\"font-weight: 400;\">The world we see today is a sci-fi utopia, bending technology, design, and nature together in a way that is harmonious and seamless. We effortlessly use a plethora of technologies. For instance, could you imagine a decade ago that you would be able to talk to a phone, console, or speaker and have it perform tasks with only your voice commands and no further action on your part? Although we still can&#8217;t seek companionship and fall in love with our AI\/operating system as shown in the movie &#8216;Her&#8217;, voice assistants have come a long way, reducing the need for computer peripherals. In this blog, we will decode the history of virtual assistants and show how you can program &amp; build your own AI personal virtual assistant using Python.&nbsp;<\/span><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><span style=\"font-weight: 400;\"><strong>From Shoebox to Smart Speakers<\/strong>&nbsp;<\/span><\/h2>\n\n\n\n<p><span style=\"font-weight: 400;\">For those who don&#8217;t know, an AI virtual assistant is a piece of software that understands written or verbal commands and completes tasks assigned by the user. The first attempts at voice assistants can be traced back to the early &#8217;60s, when IBM introduced the Shoebox, the first digital speech recognition tool. While very primitive, it did recognize 16 words and 9 digits. 
The next breakthrough came in the &#8217;90s, when Dragon launched the very first software product with competent voice recognition and transcription.&nbsp;<\/span><\/p>\n\n\n\n<p><span style=\"font-weight: 400;\">Virtual assistants became mainstream when Apple released Siri as a standalone app in February 2010 and integrated it into the iPhone 4S in 2011. The team used a mixture of <a href=\"https:\/\/www.guvi.in\/blog\/must-know-nlp-hacks-for-beginners\/\" target=\"_blank\" rel=\"noreferrer noopener\">Natural Language Processing<\/a> and speech recognition to drive virtual assistant innovation. Siri was later trained to activate after the wake-up phrase &#8220;Hey Siri&#8221;; a user could then ask a question, for instance, &#8220;What&#8217;s the weather like in Chennai today?&#8221;. The transcribed text was then passed to NLP software to interpret. After Siri, Google Now and Microsoft&#8217;s Cortana soon followed.&nbsp;<\/span><\/p>\n\n\n\n<p><span style=\"font-weight: 400;\">The next milestone was achieved by Amazon&#8217;s Alexa and its launch with the Echo, ushering in what we call today &#8220;The Smart Speaker&#8221; &#8211; and the birth of Voicebot.ai.&nbsp;<\/span><\/p>\n\n\n\n<p><span style=\"font-weight: 400;\">The smart speaker will play out for years to come, but we expect that the voice assistant revolution will later morph into an ambient voice revolution, no longer constrained to particular devices or narrowly assigned user tasks. 
Instead, they will be embedded into the environments we inhabit.&nbsp;<\/span><\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-style-default\"><img decoding=\"async\" width=\"828\" height=\"464\" src=\"http:\/\/blog.guvi.in\/wp-content\/uploads\/2021\/07\/Screenshot-2021-07-28-at-11.17.45-AM.png\" alt=\"Voice-Assistant \" class=\"wp-image-5031\" srcset=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2021\/07\/Screenshot-2021-07-28-at-11.17.45-AM.png 828w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2021\/07\/Screenshot-2021-07-28-at-11.17.45-AM-300x168.png 300w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2021\/07\/Screenshot-2021-07-28-at-11.17.45-AM-768x430.png 768w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2021\/07\/Screenshot-2021-07-28-at-11.17.45-AM-600x336.png 600w\" sizes=\"(max-width: 828px) 100vw, 828px\" title=\"\"><figcaption class=\"wp-element-caption\">MODERN ERA TRANSITIONS TO THE SMART SPEAKER REVOLUTION<\/figcaption><\/figure>\n\n\n\n<p><em>If you would like to explore Python programming through a Self-paced course, try HCL GUVI\u2019s <strong><a href=\"https:\/\/www.guvi.in\/courses\/programming\/python\/?utm_source=Blog&amp;utm_medium=hyperlink&amp;utm_campaign=Build+your+own+personal+voice+assistant+like+Siri%2C+Alexa+using+Python&amp;utm_term=Artificial+Intelligence\" data-type=\"link\" data-id=\"https:\/\/www.guvi.in\/courses\/programming\/python\/?utm_source=Blog&amp;utm_medium=organic&amp;utm_campaign=Build+your+own+personal+voice+assistant+like+Siri%2C+Alexa+using+Python&amp;utm_term=Artificial+Intelligence\" target=\"_blank\" rel=\"noreferrer noopener\">Python Course with IIT Certification<\/a>. <\/strong><\/em><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><span style=\"font-weight: 400;\"><strong>Let&#8217;s get started with Personal Voice-Assistant AI development&nbsp;<\/strong><\/span><\/h2>\n\n\n\n<p><span style=\"font-weight: 400;\">Let&#8217;s make a distinction here before we start. 
If you want to build voice and NLP capabilities into your own application, you have several cloud and API options. For Apple, you can use the SiriKit API, along with the $99 cost of registering as an Apple developer and publishing on the App Store. One such example is Swiggy and its voice command UI to track the delivery partner. Other cloud options include Amazon&#8217;s Alexa (with an AWS account) and Google Assistant.&nbsp;<\/span><\/p>\n\n\n\n<p><span style=\"font-weight: 400;\">But if you don&#8217;t want to lock yourself into a particular ecosystem, you can develop your own voice assistant. It&#8217;s just a matter of speech recognition, a pipeline, a rules engine, a query parser, and a pluggable architecture with open APIs.<\/span><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><span style=\"font-weight: 400;\"><strong>The components &amp; Python Packages for Voice interface&nbsp;<\/strong><\/span><\/h2>\n\n\n\n<p><span style=\"font-weight: 400;\">Now we&#8217;d like to discuss the basic technologies in AI voice assistants &#8211; simply put, what makes a voice interface different from a visual one.&nbsp;<\/span><\/p>\n\n\n\n<p><span style=\"font-weight: 400;\">There are a few components of a voice assistant:&nbsp;<\/span><\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><b>Voice Input\/output<\/b><\/h3>\n\n\n\n<p><b> <\/b><span style=\"font-weight: 400;\">The user does not need to touch the screen or GUI elements to make a request; a voice command is enough. Our voice assistant performs tasks using speech-to-text (STT): it converts the user&#8217;s spoken command into text, analyzes it, and carries it out. We will be using the <\/span><b>SpeechRecognition &amp; pyttsx3<\/b><span style=\"font-weight: 400;\"> libraries to convert speech to text and vice versa. 
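The "rules engine" mentioned earlier can be sketched in a few lines of Python. This is a minimal illustration only; the trigger words and handler names below are invented for the example and are not part of the final script:

```python
# Minimal sketch of a voice-assistant "rules engine": map trigger phrases to
# handler functions. Triggers and handlers here are illustrative only.
def handle_time(command):
    return "Checking the time"

def handle_search(command):
    return "Searching the web"

RULES = [("time", handle_time), ("search", handle_search)]

def dispatch(command):
    """Route a transcribed command to the first matching handler."""
    command = command.lower()
    for trigger, handler in RULES:
        if trigger in command:
            return handler(command)
    return "Sorry, I did not understand that."

print(dispatch("What time is it?"))   # -> Checking the time
```

The full script later in this post uses the same idea, written as a chain of `elif 'keyword' in statement:` checks instead of a lookup table.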
The packages support macOS, Linux, and Windows.&nbsp;<\/span><\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><b>NLP &amp; Intelligent<\/b> <b>Interpretation<\/b><\/h3>\n\n\n\n<p><span style=\"font-weight: 400;\">Our voice assistant shouldn&#8217;t be limited to fixed catchphrases; the user should be free to phrase requests naturally. The response is built by tagging the elements of the query that matter to your user. We will be integrating the <\/span><b>Wolfram Alpha API <\/b><span style=\"font-weight: 400;\">to compute expert-level answers using Wolfram&#8217;s knowledge base, algorithms, and AI technology, all made possible by the Wolfram Language.&nbsp;<\/span><\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><b>Subprocesses<\/b><\/h3>\n\n\n\n<p><b> <\/b><span style=\"font-weight: 400;\">Python&#8217;s standard <\/span><b>subprocess<\/b><span style=\"font-weight: 400;\"> module runs system commands, for tasks like logging off or restarting, telling the current time, and setting alarms. We will also be using the <\/span><b>os<\/b><span style=\"font-weight: 400;\"> library in Python to interact with the operating system.&nbsp;<\/span><\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><b>Compress the speech<\/b><\/h3>\n\n\n\n<p><span style=\"font-weight: 400;\">This part of our voice assistant is responsible for the fast delivery of a command response to the user. We will use the <\/span><b>JSON <\/b><span style=\"font-weight: 400;\">module for storing and exchanging data. 
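As a quick illustration of the JSON module, here is how a response could be serialized and parsed back; the query/response pair is made up for the example:

```python
import json

# Cache a ready-made response so a repeated query can be answered quickly.
# The query/response pair below is invented for illustration.
cache = {"what is the time": "10:30:00"}

encoded = json.dumps(cache)     # serialize the dict to a JSON string
decoded = json.loads(encoded)   # parse the JSON string back into a dict

print(decoded["what is the time"])   # -> 10:30:00
```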
It&#8217;s reliable and fast.<\/span><\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><b>Other libraries<\/b><\/h3>\n\n\n\n<p><span style=\"font-weight: 400;\">Apart from the essential features, we will use several other Python libraries such as <\/span><b>wikipedia, ecapture, time, datetime, requests, and others <\/b><span style=\"font-weight: 400;\">to enable more functions.&nbsp;<\/span><\/p>\n\n\n\n<p><span style=\"font-weight: 400;\">To begin with, it&#8217;s necessary to install all the above-mentioned packages in your system using the <\/span><i><span style=\"font-weight: 400;\">pip command<\/span><\/i><span style=\"font-weight: 400;\">.&nbsp;<\/span>If you want to brush up on your Python fundamentals, visit here. <\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Writing script for Personal Voice Assistants <\/h2>\n\n\n\n<figure class=\"wp-block-image size-large is-style-default\"><img decoding=\"async\" width=\"1024\" height=\"640\" src=\"http:\/\/blog.guvi.in\/wp-content\/uploads\/2021\/07\/7806-1024x640.jpg\" alt=\"voice-assistant\" class=\"wp-image-5045\" srcset=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2021\/07\/7806-1024x640.jpg 1024w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2021\/07\/7806-300x188.jpg 300w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2021\/07\/7806-768x480.jpg 768w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2021\/07\/7806-1536x960.jpg 1536w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2021\/07\/7806-2048x1280.jpg 2048w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2021\/07\/7806-600x375.jpg 600w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2021\/07\/7806-945x591.jpg 945w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" title=\"\"><figcaption class=\"wp-element-caption\"> <\/figcaption><\/figure>\n\n\n\n<p>First of all, let&#8217;s install the libraries from the terminal and then import them. For the sake of clarity, we&#8217;ll name our personal voice assistant <strong>&#8220;JARVIS-One&#8221;<\/strong>. 
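The packages above can be installed from a terminal with pip. The PyPI package names below are the usual ones for these imports (SpeechRecognition also needs PyAudio for microphone input); verify them against PyPI if a name has changed:

```shell
pip install SpeechRecognition pyttsx3 PyAudio wikipedia ecapture wolframalpha requests
```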
(Any resemblance is uncanny.)<\/p>\n\n\n\n<pre class=\"wp-block-syntaxhighlighter-code\">import speech_recognition as sr\nimport pyttsx3\nimport datetime\nimport os\nimport time\nimport subprocess\nimport wikipedia\nimport webbrowser\nfrom ecapture import ecapture as ec\nimport wolframalpha\nimport json\nimport requests<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Setting up Speech Engine <\/h3>\n\n\n\n<p>We are going to use <strong>SAPI5<\/strong>, the Microsoft text-to-speech engine, for voice output. The pyttsx3 engine is stored in a variable named <strong>engine<\/strong>. We can set the voice index as either 0 or 1: &#8216;0&#8217; typically selects a male voice &amp; &#8216;1&#8217; a female voice. <\/p>\n\n\n\n<pre class=\"wp-block-syntaxhighlighter-code\">engine=pyttsx3.init('sapi5')\nvoices=engine.getProperty('voices')\nengine.setProperty('voice',voices[0].id)<\/pre>\n\n\n\n<p>Further, we will define a function <strong>speak<\/strong> which will convert <a href=\"https:\/\/krispcall.com\/tools\/text-to-speech\/\" target=\"_blank\" data-type=\"link\" data-id=\"https:\/\/krispcall.com\/tools\/text-to-speech\/\" rel=\"noreferrer noopener\">text to speech<\/a>. The <strong>speak <\/strong> function takes the text as an argument and passes it to the engine. <\/p>\n\n\n\n<h3 class=\"wp-block-heading\">&#8216;<strong>RunAndWait&#8217;<\/strong> Command <\/h3>\n\n\n\n<p>Just as the name suggests, this call blocks while processing all currently queued commands. It invokes callbacks for appropriate engine notifications and returns once all commands queued before this call are emptied from the queue. <\/p>\n\n\n\n<pre class=\"wp-block-syntaxhighlighter-code\">def speak(text):\n    engine.say(text)\n    engine.runAndWait()<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Greeting the User <\/h3>\n\n\n\n<p>We define a <strong>wishMe <\/strong>function for the personal voice assistant to greet the user. 
The <strong>datetime.now().hour <\/strong> expression extracts the hour from the current time. <\/p>\n\n\n\n<p id=\"100b\">If the hour is at least zero and less than 12, the voice assistant greets you with the message \u201cGood Morning &lt;F_name&gt;\u201d.<\/p>\n\n\n\n<p id=\"9f97\">If the hour is at least 12 and less than 18, the voice assistant greets you with the message \u201cGood Afternoon &lt;F_name&gt;\u201d.<\/p>\n\n\n\n<p id=\"c466\">Else it voices out the message \u201cGood Evening\u201d.<\/p>\n\n\n\n<pre class=\"wp-block-syntaxhighlighter-code\">def wishMe():\n    hour=datetime.datetime.now().hour\n    if hour&gt;=0 and hour&lt;12:\n        speak(\"Hello F_name,Good Morning\")\n        print(\"Hello F_name,Good Morning\")\n    elif hour&gt;=12 and hour&lt;18:\n        speak(\"Hello F_name,Good Afternoon\")\n        print(\"Hello F_name,Good Afternoon\")\n    else:\n        speak(\"Hello F_name,Good Evening\")\n        print(\"Hello F_name,Good Evening\")<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Setting up command function for our personal voice assistant <\/h3>\n\n\n\n<p>Now we need to define a function <strong>takeCommand <\/strong>for the personal voice assistant to understand and analyze human language. The microphone captures the voice input and the recognizer converts the speech to text. <\/p>\n\n\n\n<p>We will also incorporate exception handling to deal with errors at run time. The <strong>recognize_google <\/strong>function uses Google&#8217;s speech recognition service to transcribe the audio. 
<\/p>\n\n\n\n<pre class=\"wp-block-syntaxhighlighter-code\">def takeCommand():\n    r=sr.Recognizer()\n    with sr.Microphone() as source:\n        print(\"Listening...\")\n        audio=r.listen(source)\n        try:\n            statement=r.recognize_google(audio,language='en-in')\n            print(f\"user said:{statement}\\n\")\n        except Exception as e:\n            speak(\"Pardon me, please say that again\")\n            return \"None\"\n        return statement\nprint(\"Loading your AI personal assistant JARVIS-One\")\nspeak(\"Loading your AI personal assistant JARVIS-One\")\nwishMe()<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">The Main Loop <\/h3>\n\n\n\n<p>The main loop starts from here; the command given by the user is stored in the variable <strong>statement. <\/strong><\/p>\n\n\n\n<pre class=\"wp-block-syntaxhighlighter-code\">if __name__=='__main__':\n    while True:\n        speak(\"How can I help you now?\")\n        statement = takeCommand().lower()\n        if statement == \"none\":\n            continue<\/pre>\n\n\n\n<p>The voice assistant JARVIS can now listen for some trigger words assigned by the user. <\/p>\n\n\n\n<pre class=\"wp-block-syntaxhighlighter-code\">        if \"good bye\" in statement or \"ok bye\" in statement or \"stop\" in statement:\n            speak('your personal assistant JARVIS-one is shutting down,Good bye')\n            print('your personal assistant JARVIS-one is shutting down,Good bye')\n            break<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Summoning Skills <\/h2>\n\n\n\n<p>Now that we have finished setting up the voice assistant, we will build the essential skills. <\/p>\n\n\n\n<h3 class=\"wp-block-heading\">1. Accessing Data from Web Browsers-G-Mail, Google Chrome &amp; YouTube <\/h3>\n\n\n\n<p>The <strong>open_new_tab <\/strong>function accepts the URL to be accessed as a parameter. 
The <strong>time.sleep function <\/strong>delays the execution of the program for a given number of seconds. <\/p>\n\n\n\n<pre class=\"wp-block-syntaxhighlighter-code\">        elif 'open youtube' in statement:\n            webbrowser.open_new_tab(\"https:\/\/www.youtube.com\")\n            speak(\"youtube is open now\")\n            time.sleep(5)\n        elif 'open google' in statement:\n            webbrowser.open_new_tab(\"https:\/\/www.google.com\")\n            speak(\"Google chrome is open now\")\n            time.sleep(5)\n        elif 'open gmail' in statement:\n            webbrowser.open_new_tab(\"https:\/\/mail.google.com\")\n            speak(\"Google Mail open now\")\n            time.sleep(5)<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">2. Fetching Data with Wikipedia API<\/h3>\n\n\n\n<p>Once we have successfully imported the wikipedia library, we can use the following command to extract data from it. The <strong>wikipedia.summary() <\/strong>function lets the user ask for any trivia and returns a short summary, stored in the variable <strong>results. <\/strong><\/p>\n\n\n\n<pre class=\"wp-block-syntaxhighlighter-code\">        elif 'wikipedia' in statement:\n            speak('Searching Wikipedia...')\n            statement =statement.replace(\"wikipedia\", \"\")\n            results = wikipedia.summary(statement, sentences=3)\n            speak(\"According to Wikipedia\")\n            print(results)\n            speak(results)<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">3. Telling the Time <\/h3>\n\n\n\n<p><strong>JARVIS-One <\/strong>can report the current time using the <strong>datetime.now() <\/strong>function, formatting hour, minute &amp; second into a variable named <strong>strTime. <\/strong><\/p>\n\n\n\n<pre class=\"wp-block-syntaxhighlighter-code\">        elif 'time' in statement:\n            strTime=datetime.datetime.now().strftime(\"%H:%M:%S\")\n            speak(f\"the time is {strTime}\")<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">4. 
Clicking Pictures <\/h3>\n\n\n\n<p>The <strong>ec.capture() <\/strong>function enables JARVIS-One to click pictures from your camera. It has 3 parameters: camera index, window name &amp; save name. <\/p>\n\n\n\n<p>If there are two webcams, the first is indexed &#8216;0&#8217; and the second &#8216;1&#8217;. The window name can be either a string or a variable; if you don&#8217;t want to show the preview window, pass False. <\/p>\n\n\n\n<p>You can also give a name to the clicked image; if you don&#8217;t wish to save the image, pass <strong>False. <\/strong><\/p>\n\n\n\n<pre class=\"wp-block-syntaxhighlighter-code\">        elif \"camera\" in statement or \"take a photo\" in statement:\n            ec.capture(0,\"robo camera\",\"img.jpg\")<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">5. To fetch latest news <\/h3>\n\n\n\n<p>JARVIS-One is programmed to fetch the top headline news from the Times of India by using the web browser function.<\/p>\n\n\n\n<pre class=\"wp-block-syntaxhighlighter-code\">        elif 'news' in statement:\n            news = webbrowser.open_new_tab(\"https:\/\/timesofindia.indiatimes.com\/home\/headlines\")\n            speak('Here are some headlines from the Times of India,Happy reading')\n            time.sleep(6)<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">6. Fetching Data from web <\/h3>\n\n\n\n<p>The <strong>open_new_tab() <\/strong>function will help search and extract data from the web. For instance, you can search for pictures of blue dandelions, and <strong>JARVIS-One <\/strong>will open a Google search for them. <\/p>\n\n\n\n<pre class=\"wp-block-syntaxhighlighter-code\">        elif 'search' in statement:\n            statement = statement.replace(\"search\", \"\")\n            webbrowser.open_new_tab(\"https:\/\/www.google.com\/search?q=\" + statement)\n            time.sleep(5)<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">7. 
Wolfram Alpha API for geographical and computational questions <\/h3>\n\n\n\n<p>The third-party Wolfram Alpha API enables JARVIS-One to answer computational and geographical questions. However, to access the Wolfram Alpha API, you need to create an account and obtain a unique app ID from the <a href=\"https:\/\/www.wolframalpha.com\" target=\"_blank\" rel=\"noreferrer noopener\">official website<\/a>. The&nbsp;<strong>client<\/strong>&nbsp;is an instance of the Wolfram Alpha client class, whereas the&nbsp;<strong>res<\/strong>&nbsp;variable stores the response returned by Wolfram Alpha.<\/p>\n\n\n\n<pre class=\"wp-block-syntaxhighlighter-code\">        elif 'ask' in statement:\n            speak('I can answer computational and geographical questions. What question do you want to ask now?')\n            question=takeCommand()\n            app_id=\"Paste your unique ID here \"\n            client = wolframalpha.Client(app_id)\n            res = client.query(question)\n            answer = next(res.results).text\n            speak(answer)\n            print(answer)<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">8. Weather Forecasting<\/h3>\n\n\n\n<p>With an API key from OpenWeatherMap, your personal voice assistant can fetch the weather. It is an online service that offers weather data for locations worldwide. The city name is captured with the <strong>takeCommand() function. <\/strong>Here is the code. 
<\/p>\n\n\n\n<pre class=\"wp-block-syntaxhighlighter-code\">        elif \"weather\" in statement:\n            api_key=\"Apply your unique ID\"\n            base_url=\"https:\/\/api.openweathermap.org\/data\/2.5\/weather?\"\n            speak(\"what is the city name\")\n            city_name=takeCommand()\n            complete_url=base_url+\"appid=\"+api_key+\"&amp;q=\"+city_name\n            response = requests.get(complete_url)\n            x=response.json()\n            if x[\"cod\"]!=\"404\":\n                y=x[\"main\"]\n                current_temperature = y[\"temp\"]\n                current_humidity = y[\"humidity\"]\n                z = x[\"weather\"]\n                weather_description = z[0][\"description\"]\n                speak(\" Temperature in kelvin unit is \" +\n                      str(current_temperature) +\n                      \"\\n humidity in percentage is \" +\n                      str(current_humidity) +\n                      \"\\n description  \" +\n                      str(weather_description))\n                print(\" Temperature in kelvin unit = \" +\n                      str(current_temperature) +\n                      \"\\n humidity (in percentage) = \" +\n                      str(current_humidity) +\n                      \"\\n description = \" +\n                      str(weather_description))\n            else:\n                speak(\"City not found\")<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">9. Credits <\/h3>\n\n\n\n<p>It adds an element of fun to program <strong>JARVIS-One <\/strong> to answer questions such as &#8220;what can you do&#8221; and &#8220;who created you&#8221;. <\/p>\n\n\n\n<pre class=\"wp-block-syntaxhighlighter-code\">        elif 'who are you' in statement or 'what can you do' in statement:\n            speak('I am JARVIS-one version 1 point O, your personal assistant. '\n                  'I am programmed to do minor tasks like '\n                  'opening youtube, google chrome, gmail and stackoverflow, telling the time, taking a photo, searching wikipedia, predicting the weather '\n                  'in different cities, getting top headline news from Times of India, and answering computational or geographical questions too!')\n        elif \"who made you\" in statement or \"who created you\" in statement or \"who discovered you\" in statement:\n            speak(\"I was built by F_NAME\")\n            print(\"I was built by F_NAME\")<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">10. Subprocesses &#8211; Log Off Your System <\/h3>\n\n\n\n<p>The&nbsp;<strong>subprocess.call()<\/strong>&nbsp;function here runs the system command to log off your PC, so your AI assistant can log you off on request.<\/p>\n\n\n\n<pre class=\"wp-block-syntaxhighlighter-code\">        elif \"log off\" in statement or \"sign out\" in statement:\n            speak(\"Ok , your pc will log off in 10 sec make sure you exit from all applications\")\n            time.sleep(10)\n            subprocess.call([\"shutdown\", \"\/l\"])<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Wrapping up <\/h2>\n\n\n\n<p>Now that you have got the hang of it, you can build your own personal voice assistant from scratch. Similarly, you can incorporate many other free APIs to enable more functionality. <\/p>\n\n\n\n<p>In case you want to review the full code, visit this <a href=\"https:\/\/github.com\/mmirthula02\/AI-Personal-Voice-assistant-using-Python\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Git Repository.<\/a> (All credit goes to the developer). <em>HCL<\/em> GUVI is an IIT-M incubated springboard for knowledge and has helped millions of students with their programming journey. 
<\/p>\n\n\n\n<p><em>If you would like to explore Python programming through a Self-paced course, try <em>HCL<\/em><\/em> <em>GUVI\u2019s <strong><a href=\"https:\/\/www.guvi.in\/courses\/programming\/python\/?utm_source=Blog&amp;utm_medium=hyperlink&amp;utm_campaign=Build+your+own+personal+voice+assistant+like+Siri%2C+Alexa+using+Python&amp;utm_term=Artificial+Intelligence\" data-type=\"link\" data-id=\"https:\/\/www.guvi.in\/courses\/programming\/python\/?utm_source=Blog&amp;utm_medium=organic&amp;utm_campaign=Build+your+own+personal+voice+assistant+like+Siri%2C+Alexa+using+Python&amp;utm_term=Artificial+Intelligence\" target=\"_blank\" rel=\"noreferrer noopener\">Python Course with IIT Certification<\/a>. <\/strong><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The world we see today is a sci-fi utopian, bending technology, design, and nature together in a way that is harmonious and seamless. We effortlessly use a plethora of technologies. For instance, could you imagine a decade ago that you would be able to talk to a phone, console, or speaker and it would perform [&hellip;]<\/p>\n","protected":false},"author":11,"featured_media":5044,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[933],"tags":[],"views":"32428","authorinfo":{"name":"Tushar 
Vinocha","url":"https:\/\/www.guvi.in\/blog\/author\/tushar\/"},"thumbnailURL":"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2021\/07\/Screenshot-2021-07-29-at-11.23.58-AM-300x168.png","jetpack_featured_media_url":"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2021\/07\/Screenshot-2021-07-29-at-11.23.58-AM.png","_links":{"self":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts\/5028"}],"collection":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/users\/11"}],"replies":[{"embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/comments?post=5028"}],"version-history":[{"count":36,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts\/5028\/revisions"}],"predecessor-version":[{"id":90709,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts\/5028\/revisions\/90709"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/media\/5044"}],"wp:attachment":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/media?parent=5028"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/categories?post=5028"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/tags?post=5028"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}