
The purpose of this post is not to provide the answers, but rather to share some study notes for passing the PL-300 exam on the first attempt.


Data Framework

  • Storage mode: property that you can set on each table in your model, controls how Power BI caches the table data 

    • Configure in Power BI Desktop Model view

  • Import: allows you to create a local Power BI copy of your semantic models from your data source

    • Can use all Power BI service features with this storage mode, including Q&A and Quick Insights

    • Data refreshes can be scheduled or on-demand

    • Default mode for creating new Power BI reports. 

  • DirectQuery: direct connection to the data source

    • Data won't be cached

    • Ensures that you're always viewing the most up-to-date data, and that all security requirements are satisfied.

    • Solves data latency issues for large semantic models 

    • Automatic page refresh in Power BI Desktop and Power BI Service

      • Fixed interval: update all visuals in a report page based on a constant interval such as one second or five minutes

      • Change detection: refresh visuals on a page based on detecting changes in the data rather than a specific refresh interval

  • Dual/Composite: a model that combines both Import and DirectQuery sources

    • Hybrid tables combine changing real-time data from DirectQuery with static historical data from Import

    • Tables with this setting can act as either cached or not cached, depending on the context of the query that's submitted to the semantic model

    • Setting the related dimension tables to dual storage mode will allow Power BI to satisfy higher-grain queries entirely from cache

  • Incremental Refresh: refresh large semantic models quickly and as often as needed, without having to reload historical data each time 

    • Define the RangeStart and RangeEnd filter parameters to configure the window that incremental refresh covers (see the M sketch after this list)

  • Star schema: data structure with a single table for each dimension (ex. product), and one or more fact tables

  • Settings

    • Query reduction (Power BI Desktop Options)

      • Reduce number of queries: disables the default behavior that automatically applies cross-highlighting and cross-filtering between visuals in the same report

      • Filters: instantly apply basic filter changes, or add Apply buttons so filters run only when the user is ready

      • Slicers: instantly apply slicer changes, or add an Apply button to each slicer

    • Persistent filters (Report settings): prevent users from saving filters on this report in the Power BI service
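A minimal sketch of the incremental refresh filter step in Power Query (M), assuming a hypothetical Sales source table with an OrderDate datetime column:

// Keep only the rows inside the refresh window; RangeStart and RangeEnd
// are the reserved datetime parameters that incremental refresh requires
= Table.SelectRows(Sales, each [OrderDate] >= RangeStart and [OrderDate] < RangeEnd)

Power BI substitutes the parameter values at refresh time, so only new partitions are loaded.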


Data Connections

  • Azure Data Box: solution for migrating data to Azure

  • Power BI gateway (on-premises data gateway): allows multiple users to connect to multiple on-premises data sources

    • Personal mode allows only one user to access multiple data sources

  • Virtual network (VNet) data gateway: provides connectivity to data sources secured by virtual networks, such as an Azure virtual network

    • Runs as a managed service, so nothing is installed locally, and it only works with data sources secured by virtual networks


DAX calculations

  • Calculated column: calculation used to add a column to a tabular model by writing a DAX formula that returns a scalar value, and is evaluated in row context

    • Can be added to an Import or DirectQuery storage mode table.

    • Can be used in a slicer, or placed in the “Filters on this page” well of the Filters pane

  • Measure: calculation that is performed on the fly at query time, based on the data in the columns of the table (see the DAX sketch after this list)

    • Can only be placed per visual, in the "Filters on this visual" well of the Filters pane 

  • Compound measure: measure that references another measure 

  • Implicit measure: an automatically generated calculation to summarize column values 

    • End-users can select from one of nine aggregations when placed in the Values well of a visual

    • Both implicit and explicit measures can be used as a Drillthrough field, to create quick measures, and with Field Parameters

  • Quick measure: feature in Power BI Desktop that eliminates the need to write DAX formulas for commonly defined measures 

    • Ex. average per category, rank, and difference from baseline 

    • Applies calculations to existing fields in your model

    • Note: time intelligence functions have performance implications and are disabled for quick measures against DirectQuery tables

  • Calculated table: model calculation used to add a table to a tabular model by writing a DAX formula that returns a table object that uses Import storage mode.

    • Store data inside the model, so adding them results in a larger model size 

    • Not evaluated in any context 

    • Duplicates the source data, but does not copy model configurations such as column visibility or hierarchies
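A minimal DAX sketch of the calculated column vs. measure difference, assuming a hypothetical Sales table with Quantity and Unit Price columns:

-- Calculated column: evaluated row by row when the model is refreshed,
-- and stored in the model
Line Amount = Sales[Quantity] * Sales[Unit Price]

-- Measure: evaluated at query time in the current filter context
Total Sales = SUMX(Sales, Sales[Quantity] * Sales[Unit Price])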


DAX functions

  • CALCULATE: provides the result of the calculation with the ability to override context

  • CALENDAR: returns a table with a column named Date that contains a contiguous set of dates based on the start and end dates you specify

  • CALENDARAUTO: returns a table with a column named Date that contains a contiguous set of dates based on data in the model

  • COUNTROWS: summarizes a table by returning the number of rows 

  • CROSSJOIN: returns a Cartesian product of all rows from all tables that the function references

  • EXCEPT: returns rows from one table which do not appear in another table

  • FILTER: returns a table that represents a subset of another table or expression

  • IGNORE: omits specific expressions from the BLANK/NULL evaluation

  • PATH: returns a string with the identifiers of all the parents of the current identifier, used to flatten a parent-child hierarchy

  • PATHITEM: returns the item at the specified position within a PATH result, also used when flattening a hierarchy

  • RELATED: returns a related value from another table

  • SUMX: returns the sum of an expression evaluated for each row in a table
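A short sketch combining several of these functions, assuming hypothetical Sales and Product tables related on a product key:

-- Row count for red products, overriding any existing filter on Product[Color]
Red Sales Count = CALCULATE(
    COUNTROWS(Sales),
    FILTER(ALL(Product[Color]), Product[Color] = "Red")
)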


M Functions

  • #date: creates a date value from the year, month, and day you specify

  • #duration: creates a duration value from days, hours, minutes, and seconds; often used as the step when generating the rows of a date table

  • List.Combine(): combines multiple lists into one

  • List.Durations: returns a list of count duration values, starting at a given duration and incremented by a step
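A minimal sketch (hypothetical dates) showing how these building blocks fit together; List.Dates, a close sibling of List.Durations, pairs naturally with #date and #duration:

// 365 consecutive dates starting 1 Jan 2024, stepping one day at a time
= List.Dates(#date(2024, 1, 1), 365, #duration(1, 0, 0, 0))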

Reports

  • Options 

    • Pin visual: pin the visual to an existing dashboard or create a new one 

    • Copy visual as image: copy a visual as an image to Clipboard 

    • Export data: export data in xlsx or csv 

    • Spotlight: highlight a visual on the report page 

  • Interaction behaviour 

    • Filter: shows you the filtered data in this visual 

    • Highlight: the default interaction between visuals, shows you both the unfiltered and filtered values in the visual, for comparison purposes 

    • Drillthrough: page navigation experience that takes you from one page to another and applies a set of filters to the destination page 

    • Expand: a way to navigate down a level using the hierarchy controls. 

  • Accessibility: consider how report consumers with no-to-low vision or other physical disabilities can fully experience the reports 

    • Use contrasting colors, clear and large-sized fonts, well-spaced and large visuals, intuitive navigation  

  • Form factor: size of the hardware that is used to open reports and page orientation (portrait or landscape)

  • Dashboard: a canvas of report elements built in the Power BI service; unlike a report, which is based on a single data source, a dashboard can combine visuals from multiple data sources


Analytics & Visualization

  • Features

    • Analyze in Excel: create an Excel workbook containing the entire semantic model for a specific Power BI report and analyze that semantic model in Excel using PivotTables, Pivot Charts, and other Excel features

    • Q&A feature: create a visual by typing in a question about your data

      • Select "Ask a question about your data"

      • The visual can be pinned to a dashboard without adding it to a report

  • Visuals

    • Clustering: identify a segment (cluster) of data that is similar to each other but dissimilar to the rest of the data 

    • Decomposition Tree: visualize data between multiple dimensions and drill down in any order

    • Key influencers: helps you understand correlated factors impacting a particular metric

    • Python: create by first enabling the script visuals option in the Visualizations pane in Power BI Desktop

      • Report consumers don't need Python installed to view the visual in the Power BI service

    • Q&A visual: allows end-users to ask natural language questions to create AI-generated charts

    • Smart Narrative: combines natural language text with metrics from your model in sentence forms

  • Personalize visuals

    • Enable in either Power BI Desktop or Power BI Service

    • Switch on/off at a page or visual level

    • Perspectives: choose a subset of a model that provides a more focused view

    • End-user modifications:

      • Change the visualization type.

      • Swap out a measure or dimension.

      • Add or remove a legend.

      • Compare two or more measures.

      • Change aggregations, etc.


Data Administration

  • Lineage view: view and troubleshoot the data flow from source to destination 

  • Data alerts: notify you when data in a dashboard tile changes beyond a threshold you set; Power BI sends an alert to your Notification center and, optionally, an email 

    • Alerts only work on streaming datasets if you build a KPI, card, or gauge report visual and then pin that visual to the dashboard. 

  • Row-level security (RLS): design technique to restrict access to subsets of data for specific users

    • In a tabular model, RLS is achieved by creating model roles. 

      • Power BI Desktop > Manage roles

    • Roles have rules, which are DAX expressions that filter table rows (see the example rule after this list). 

    • How to create a role: 

      • Model view - design and implement structure of a dataset and includes the option to create a role 

      • Report view - provides the ability to manage roles, including their creation 

    • Use Power BI Desktop or Power BI service to test RLS 

      • USEROBJECTID() returns the object ID or SID of the signed-in user, but not their name 

      • USERPRINCIPALNAME() returns the user principal name (typically the email address) of the user signed in to view a report 

    • RLS is not required for a dataset to be discoverable

  • Performance analyzer: see and record logs that measure how each of your report elements performs when users interact with them and which aspects of their performance are most (or least) resource intensive

    • Before running Performance analyzer, to ensure the most accurate results in your analysis (test), start with a clear visual cache and a clear data engine cache

      • To clear the visual cache, add a blank page to the .pbix file and select it

      • To clear the data cache, either restart Power BI Desktop or connect DAX Studio to the semantic model and then call Clear Cache 

  • Endorse semantic models 

    • Promotion: promote your semantic models when they're ready for broad usage 

      • A promoted dataset can be made discoverable, so users without access can find it and request permissions

    • Certification: request certification for a promoted semantic model 

      • A certified dataset can likewise be made discoverable, so users without access can find it and request permissions

      • Certification can be a highly selective process, so only the truly reliable and authoritative semantic models are used across the organization    

  • Workspace settings

    • Workspace OneDrive: configure a Microsoft 365 group whose SharePoint Online document library is available to workspace users once the workspace is created

    • Allow contributors to update the app: provide additional permissions for workspace contributors

    • Develop a template: set up a template app workspace

    • License mode: choose between Pro, Premium per user, Premium per capacity, and Embedded licensing
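Returning to row-level security above, a minimal sketch of a role rule, assuming a hypothetical Sales table with a SalespersonEmail column:

-- Rule on the Sales table: each salesperson sees only their own rows
'Sales'[SalespersonEmail] = USERPRINCIPALNAME()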


There is a free practice assessment on Microsoft Learn. To be better prepared for the exam, aim to score 80% or higher across multiple attempts, which is what I did as well.

[Screenshot: PL-300 practice assessment results]

For each practice assessment that you complete, Microsoft Learn provides a score breakdown and detailed explanations for every question.

[Screenshot: PL-300 practice assessment score breakdown]

Note that Microsoft periodically updates the exam content. I happened to take my exam right after the April 2024 update, when they had just added some new skill areas: automatic page refresh, personalized visuals, and accessibility.




Technical SEO is unrelated to the actual content on your site, but it still affects organic rankings. This post serves as a guide to what to check in a technical SEO audit.


XML Sitemap

An XML sitemap is a file that helps search engines understand your website structure and crawl it. It lists all the pages on your site, each page's priority, when each was last modified, and how frequently it is updated. Usually, the pages will be categorized by topic, post, product, etc.
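A minimal sketch of a sitemap with a single entry (hypothetical values):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://datachai.com/</loc>
    <lastmod>2024-02-22</lastmod>
    <changefreq>monthly</changefreq>
    <priority>1.0</priority>
  </url>
</urlset>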


You'll probably check the sitemap at the beginning of a technical audit. Find the sitemap of any site by appending /sitemap.xml to the domain, for example, https://datachai.com/sitemap.xml. If the site has multiple sitemaps, use /sitemap_index.xml.


Register your sitemap with Google Search Console, which includes several tools to check technical SEO metrics such as mobile optimization and page speed. The Google Search Console sitemap report shows which submitted URLs Google has discovered, giving you the technical insight to keep a 1:1 ratio between URLs added to the site and entries in the sitemap.


Ideally, your site has an internal linking structure that connects all pages efficiently, making a sitemap technically optional. For large sites, however, a sitemap is best practice.


If your site has a sitemap, it's best practice to include only URLs that return 200 OK, and to keep a 1:1 ratio between URLs in the sitemap and pages on the site. Remove 4xx and 5xx URLs, orphaned pages, and parameterized URLs.


Server Response Code and Redirects

Bulk check server response status codes with this Google Apps Script:

// Get the HTTP server response status code for a URL
function getStatusCode(url) {
  var options = {
    'muteHttpExceptions': true, // don't throw on 4xx/5xx responses
    'followRedirects': false    // report 3xx codes instead of following them
  };
  try {
    return UrlFetchApp.fetch(url, options).getResponseCode().toString();
  } catch (error) {
    // Fall back to parsing the status code out of the error message
    var match = error.toString().match(/returned code (\d\d\d)/);
    return match ? match[1] : error.toString();
  }
}

// Work around the IMPORTXML quota by fetching and regex-matching in Apps Script
function importRegex(url, regexInput) {
  var output = '';
  var fetchedUrl = UrlFetchApp.fetch(url, {muteHttpExceptions: true});
  if (fetchedUrl) {
    var html = fetchedUrl.getContentText();
    var match = (html.length && regexInput.length)
        ? html.match(new RegExp(regexInput, 'i'))
        : null;
    if (match && match.length > 1) {
      output = match[1]; // first capture group
    }
  }
  Utilities.sleep(1000); // throttle to stay under UrlFetchApp rate limits
  return output;
}

Then, use RegEx redirects if you need to bulk redirect multiple source URLs to the same destination, or a .htaccess file for smaller-scale redirects. However, if your site is hosted on WordPress, be careful about using .htaccess, because WP Engine deprecated it as of PHP 7.4 and subsequent versions. WP Engine suggests alternatives such as using RegEx redirects directly in WordPress or managing redirects in Yoast SEO Premium. Finally, if you are completely removing a page, orphan it and serve a 410 so that Google can remove it more quickly.
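A minimal .htaccess sketch, assuming an Apache server (hypothetical paths):

# Bulk-redirect an old blog section to a single archive page (RegEx)
RedirectMatch 301 ^/blog/2019/.* /blog/archive/

# Tell crawlers a removed page is permanently gone
Redirect 410 /old-page/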


JavaScript SEO

JavaScript is increasingly used across the web these days. Sites built with JavaScript frameworks such as React, Angular, or Vue.js have unique SEO challenges, different from sites built with a CMS such as WordPress, because content rendered client-side may not be immediately visible to crawlers. If you have problems getting Google to crawl your site, you can troubleshoot with the URL Inspection tool in Google Search Console.



Page Speed

On the topic of page speed, other best practices include using a fast DNS provider and minimizing HTTP requests by keeping CSS style sheets, scripts, and plugins to a minimum. You can also compress web pages by reducing image file sizes and cleaning up the code, especially for above-the-fold content. PageSpeed Insights is a free tool to check your page speed; it also provides specific recommendations on how to improve it.


robots.txt

The robots.txt file, also called the robots exclusion protocol or standard, is a text file that tells search engines which pages to crawl or not crawl. You can see the robots.txt file for any website by adding /robots.txt to the end of the domain. For example, https://www.datachai.com/robots.txt


Search engines look at robots.txt first before crawling a site, so a disallowed page will not be crawled. Note that a disallowed URL can still be indexed if other sites link to it; use a noindex directive if you need to keep a page out of search results entirely.
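A minimal robots.txt sketch (hypothetical paths):

# Allow all crawlers, but keep them out of /admin/
User-agent: *
Disallow: /admin/

# Point crawlers to the sitemap
Sitemap: https://www.datachai.com/sitemap.xml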


Crawl Budget

Crawl budget is the number of pages Google crawls and indexes on a website within a given timeframe. If your site has more pages than its crawl budget covers, Googlebot will not crawl and index the remainder, which can negatively affect your rankings.


Performing regular log file analysis can provide insights about how Googlebot (and other web crawlers and users) are crawling your website, giving you the necessary information to optimize the crawl budget. If your site is large and has crawl budget issues, you can adjust the crawl rate via Google Search Console.


SSL (Secure Sockets Layer)

SSL (Secure Sockets Layer) is a security technology that creates an encrypted link between a web server and a browser. You can tell whether a website is using SSL because the URL will start with https (Hypertext Transfer Protocol Secure), not http. In 2014, Google announced that they want to see https everywhere, and that websites using SSL will get priority for SEO. Google Chrome now displays warnings anytime a user visits a site that does not use SSL.


These days, most top website builders such as Wix include SSL by default. If not, simply install an SSL certificate on your website.


Canonical Link Element

Even if you don't have multiple parameter-based URLs for each page, different versions of your pages (https vs. http, www vs. non-www, with or without .html, etc.) can quickly add up. That's where the rel=canonical tag comes in: it lets you manage duplicate content by specifying the canonical, or preferred, version of your page. This flags the duplicates to Google and tells it to consolidate the ranking signals, so your page won't be disadvantaged.


If you are using a CMS like Wix or Squarespace, your web hosting service might automatically add canonical tags with the clean URL. For example, my homepage already has one:

<link rel="canonical" href="https://www.datachai.com"/>

Schema

Schema, also called structured data markup, enhances search results through the addition of rich snippets. For example, you can add star ratings or prices for your products. Adding schema by itself is not a technical SEO factor, but it is recommended by Google and can indirectly help improve rankings and increase page views.


Add Schema via WordPress Plugin

If your website is hosted on WordPress, you can use the Schema plugin to add structured markup to your pages. This plugin uses JSON-LD, which is recommended by Google and also supported by Bing.


Add Schema Manually

If your site isn't hosted on WordPress or you prefer not to rely on a plugin, you can manually add schema with a few more steps. Schema is usually added to the page header, although it's possible to add it to the body or footer as well. Some recent WordPress themes include specific text blocks for adding schema to the body.


Note that adding schema through Google Tag Manager is not recommended. If you use Google Tag Manager, the structured data will be hidden within a container, making it difficult for Google's algorithms to read and give it appropriate weight.


To add schema manually, first use a tool like MERKLE to generate the baseline markup. Although this is fine to use as is, you can paste the baseline markup into a text editor and continue to edit and customize the structured data for each page.


If you are using Sublime Text, go to View > Syntax > JavaScript > JSON to set your syntax appropriately.


Finally, insert additional properties that were not available on MERKLE as needed.


Add html Strings to Schema

You can add certain basic HTML strings to your schema markup, for example, if you'd like to include a bulleted list or hyperlink. An important thing to remember here is to escape double quotes when writing the HTML: simply replace them with single quotes.


Google Search displays the following HTML tags; all other tags are ignored: <h1> through <h6>, <br>, <ol>, <ul>, <li>, <a>, <p>, <div>, <b>, <strong>, <i>, and <em>
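A minimal sketch of FAQ markup with embedded HTML (hypothetical question and answer); note the single quotes inside the href attribute:

{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is a log file?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "<p>A log file records <b>every request</b> made to <a href='https://www.datachai.com'>your server</a>.</p>"
    }
  }]
}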


Validate Schema

Use Google's Rich Results Testing Tool to make sure that your schema markup is being read properly, and the Structured Data Testing Tool to see all of the structured data on the page. The final step is to request indexing for the page where you added markup, via Google Search Console. Within a few days, you should see your markup under the Enhancements sidebar.


Note that in June 2021, Google limited FAQ rich results to a maximum of 2 per snippet, so your snippet real estate may be a bit smaller. If you have 3 or more FAQs marked up, Google will show the 2 that are most relevant to the search query.


Log File Analysis

The log file is your website’s record of every request made to your server. Each entry includes important information such as the URL of the requested page, the HTTP status code, the timestamp, the user agent making the request, the request method (GET/POST), the client IP address, and the referrer.


By performing log file analysis, you can gain insights about how Googlebot (and other web crawlers and users) are crawling your website. The log file analysis will help you answer important technical questions such as:

  • How frequently is Googlebot crawling your site?

  • How is the crawl budget being allocated?

  • How often are new and updated pages being crawled?


You can identify where the crawl budget is being used inefficiently, such as unnecessarily crawling static or irrelevant pages, and make improvements accordingly.
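A minimal Python sketch of this kind of analysis, assuming the Apache common/combined log format (adjust the regex for your server; the file path is hypothetical):

import re
from collections import Counter

# Parse one line of an Apache combined-format access log into named fields
LOG_LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) \S+" (?P<status>\d{3}) \S+ '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

def googlebot_hits(path):
    """Count Googlebot requests per URL in the log file at `path`."""
    hits = Counter()
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            m = LOG_LINE.match(line)
            if m and "Googlebot" in m.group("agent"):
                hits[m.group("url")] += 1
    return hits

# Example usage (hypothetical file path):
# for url, count in googlebot_hits("access_log").most_common(10):
#     print(count, url)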


Obtain the Log File

The log file is stored on your web server, and you can access it via your server control panel’s file manager, the command line, or an FTP client (recommended).


The server log file is commonly found in the following locations.

  • Apache: /var/log/access_log

  • Nginx: logs/access.log

  • IIS: %SystemDrive%\inetpub\logs\LogFiles


Tools and Software

You can convert your .log file to a .csv and analyze it in Microsoft Excel or Google Sheets, or use an online log file analyzer such as SEMRush or Screaming Frog Log File Analyser. The best log file analyzer will depend on your website and what tools you might already be using for technical SEO.


Limitations

Performing regular log file analysis can be extremely useful for technical SEO, but there are some limitations. Page access and actions that occur via cache memory, proxy servers, and AJAX will not be reflected in the log file. If multiple users access your website with the same IP address, it will be counted as only one user. On the other hand, if one user uses dynamic IP assignment, the log file will show multiple accesses, overestimating the traffic count.


Core Web Vitals

In May 2020, Google introduced core web vitals as the newest way to measure user experience on a webpage. About one year later, in June 2021, Google rolled out an algorithm update that uses core web vitals as ranking factors.


There are three core web vitals:

  • Largest Contentful Paint (LCP)

  • First Input Delay (FID)

  • Cumulative Layout Shift (CLS)


The idea is that core web vitals point to a set of user-facing metrics related to page speed, responsiveness, and stability, which should help SEOs and web developers improve overall user experience. Let’s see each of the core web vitals in more depth.


Largest Contentful Paint (LCP)

Largest contentful paint (LCP) is the time from when the page begins loading, to when the largest text block or image element is rendered. The idea is to measure perceived pagespeed by estimating when the page’s main contents have finished loading.


Of course, lower (faster) scores are better. In general, LCP <2.5s is considered to be good, and >4s should be improved.


LCP is one of the more difficult core web vitals to troubleshoot because there are many factors that could cause slow load speed. Some common causes are slow server response time, render-blocking JavaScript or CSS, or the largest content resource being too heavy.

Note that if the largest text block or image element changes while the page is loading, the most recent one is used to measure LCP. Also, if it's difficult to pass core web vitals in your industry (e.g., most corporate sites have graphics-heavy pages), keep in mind that your pages are compared against your close competitors.


First Input Delay (FID)

First input delay (FID) is the time from when a user first interacts with your site, to when the browser can respond. Only single interactions count for FID, such as clicking on a link or tapping a key. Continuous interactions that have different performance constraints are excluded, such as scrolling or zooming.


FID of <100ms is generally considered good, while >300ms should be improved. If your FID score is high, it could be because the browser’s main thread is overloaded with JavaScript code. You can reduce FID by improving issues such as JavaScript execution time and third-party code impact.


Cumulative Layout Shift (CLS)

Cumulative layout shift (CLS) is about visual stability on a webpage. Instead of a time measurement, CLS is measured by a layout shift score, which is a cumulative score of all unexpected layout shifts within the viewport that occur during a page’s lifecycle. The layout shift score is the product of impact fraction and distance fraction. Impact fraction is the area of the viewport that the unstable element takes up, and distance fraction is the greatest distance that the unstable element moves between both frames, divided by the viewport’s largest dimension.
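A brief worked example: if an unstable element fills 50% of the viewport and shifts down by 25% of the viewport height, its impact fraction is 0.75 (the union of the areas it occupied before and after the shift) and its distance fraction is 0.25, so:

layout shift score = impact fraction × distance fraction = 0.75 × 0.25 ≈ 0.19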


CLS <0.1 is considered good, and >0.25 is generally considered a poor score. Common causes of poor CLS include images or ads with undefined dimensions, resources loaded asynchronously, and DOM elements dynamically added to a page above existing content. The best practice is to always include size attributes for your images and videos.


How to Measure Core Web Vitals

Core web vitals are incorporated into many Google tools that you probably already use, such as Search Console, Lighthouse, and PageSpeed Insights. In addition, a new Chrome extension called Web Vitals is now available to measure the core web vitals in real time.


User experience and pagespeed often depend on the user’s connection environment and settings as well. Every time a page is loaded, LCP, FID, and CLS will be slightly different. Your site has a pool of users that make up a distribution: some people see the pages fast, others see them slower.

For the purpose of core web vitals, Google measures what the 75th percentile of users see. This and other concepts are discussed in a recent episode of Search Off the Record.


Troubleshooting

Google offers several solutions if you suspect a bug, such as when your core web vitals numbers are poor but your site has been tested and proven to be performing well. You can join the web-vitals-feedback Google group and email list to provide feedback to Google, which Google will consider when modifying the metrics going forward.


If you're looking for individual support, you can file a bug with the core web vitals template on crbug.com. You'll need to do some technical work - for example, write a little JavaScript that has a performance observer that shows the issue.


Finally, note that the core web vitals explained above are included in Google’s page experience signal. Of course, core web vitals are not the only user experience metrics to focus on. All other web vitals, such as total blocking time (TBT), first contentful paint (FCP), speed index (SI), and time to interactive (TTI), are non-core web vitals. As Google continuously improves its understanding of user experience, it will update the web vitals regularly. As of November 2021, Google is already preparing two new vitals metrics: smoothness and overall responsiveness.


Mobile UX

Finally, Google has been giving higher rankings to websites that have a responsive or mobile site since April 2015. At the same time, they released the mobile-friendly testing tool to help SEOs ensure that they would not lose rankings after this algorithm update. Also look into AMP (Accelerated Mobile Pages), an open-source framework that aims to speed up the delivery of mobile pages via AMP’s HTML code.


This post explains how to schedule a python script execution using Windows Task Scheduler or cron jobs, allowing you to automate tasks with python on both Windows and Mac.


How to Schedule a Python Script Using Windows Task Scheduler

  1. Open the Windows Task Scheduler GUI

  2. Actions > Create Task

  3. In the General tab, give your scheduled task a name. If you change the Security options from Run only when user is logged on to Run whether user is logged on or not, the script will also run while you are logged off; to run while the computer is sleeping, also enable Wake the computer to run this task in the Conditions tab. If the computer is powered off, the script will not run and will not catch up on missed executions if it is later powered on.

  4. In the Actions tab, click New...

  • Action: Start a program

  • Program/script: the location of python executable on your computer, for ex. C:\Users\81701\AppData\Local\Microsoft\WindowsApps\python.exe

    • Get the location by typing where python in command prompt

  • Add arguments: the name of your python file, for ex. yourfile.py

  • Start in: the directory containing your python file, for ex. C:\Users\81701\python

Lastly, trigger your script execution by navigating to the Triggers tab and clicking New...
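As an alternative to the GUI, the same task can be sketched with the schtasks command line (hypothetical task name and schedule; paths as above):

schtasks /create /tn "RunMyPythonScript" /tr "C:\Users\81701\AppData\Local\Microsoft\WindowsApps\python.exe C:\Users\81701\python\yourfile.py" /sc daily /st 10:00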

How to Schedule a Python Script Using Cron Jobs on Mac (crontab)

  1. Open a terminal window. 

  2. Type the following command to edit the crontab file: crontab -e. This will open the crontab file in the default text editor, usually vi or vim. 

  3. Press the i key to enter Insert mode. 

  4. If you see a column of ~ characters, leave them alone; vi displays a ~ for each line beyond the end of the file, so they are not actual content. 

  5. Type in the cron job, for example 0 10 * * MON /usr/bin/python3 /path/to/script.py to run the script every Monday at 10:00 (see the annotated example after these steps). Some cloud storage services, like OneDrive, might have restrictions on executing files directly from their synced folders; in such cases, move the file to a local directory before scheduling it. To get the file path, right-click the python file in Finder, press the Option key, then choose 'Copy [filename] as Pathname.' On Mac, the path to the python interpreter is usually /usr/bin/python3 

  6. Press the Esc key to exit Insert mode. 

  7. Type :wq to save and exit 

  • : character puts vi in command mode 

  • w command is for writing (saving) the file 

  • q command is for quitting vi 
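A minimal crontab sketch with the five schedule fields labeled (hypothetical paths; the log redirection is optional but handy for debugging):

# minute hour day-of-month month day-of-week command
0 10 * * MON /usr/bin/python3 /path/to/script.py >> /tmp/script.log 2>&1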

 

How to check that the cronjob was created successfully? 

To check the crontab entries that you have created, open Terminal again and use the crontab -l command. 

 

What if the computer is powered off at the scheduled time?  

If the computer is sleeping or powered off, the cron daemon will not be able to execute the job. Cron jobs will not catch up on missed executions if you later log in. If you need to run a task even when the computer is powered off, use a cloud-based data platform such as Databricks.  

 

What if the script encounters an error?  

There will be a notification "You have mail" in the terminal. 

  1. Type mail to see the details 

  2. Type t to read the entire message 

  3. Type delete * to delete all terminal mail 

  4. Type q to quit mail 

 

How to delete an existing cronjob?  
  1. Open a terminal window. 

  2. Type the following command to edit the crontab file: crontab -e 

  3. Delete the line that contains the cronjob entry. In vi or vim, navigate to the line and press dd to delete it. 

  4. Type :wq to save and exit 


Other ways to schedule a python script to run automatically?
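Besides Task Scheduler and cron, you can also schedule work from inside a long-running Python process using libraries such as schedule or APScheduler, or move the job to a cloud service, such as the Databricks job scheduler mentioned above.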
