Multiple assignment in Python: Assign multiple values or the same value to multiple variables

In Python, the = operator is used to assign values to variables.

You can assign values to multiple variables in one line.

Assign multiple values to multiple variables

Assign the same value to multiple variables

You can assign multiple values to multiple variables by separating them with commas (,).

You can assign values to three or more variables in one line, and it is also possible to assign values of different data types to those variables.
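(A minimal sketch; the article's original snippets were lost in this copy, and the values are illustrative:)

    a, b = 100, 200                 # two variables
    a, b, c = 0.1, 100, 'string'    # three variables, different types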

When only one variable is on the left side, values on the right side are assigned as a tuple to that variable.
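(Reconstructed example, consistent with the description:)

    a = 100, 200
    print(a)        # (100, 200)
    print(type(a))  # <class 'tuple'>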

If the number of variables on the left does not match the number of values on the right, a ValueError occurs. You can assign the remaining values as a list by prefixing the variable name with *.
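(A sketch of both behaviors; values are illustrative:)

    # a, b = 100, 200, 300 would raise ValueError: too many values to unpack (expected 2)
    a, *b = 100, 200, 300
    print(a)  # 100
    print(b)  # [200, 300]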

For more information on using * and assigning elements of a tuple and list to multiple variables, see the following article.

  • Unpack a tuple and list in Python

You can also swap the values of multiple variables in the same way. See the following article for details:

  • Swap values in a list or values of variables in Python

You can assign the same value to multiple variables by using = consecutively.

For example, this is useful when initializing multiple variables with the same value.

After assigning the same value, you can assign a different value to one of these variables. As described later, be cautious when assigning mutable objects such as list and dict .
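(Illustrative sketch of both points:)

    a = b = 100
    b = 200
    print(a)  # 100
    print(b)  # 200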

You can apply the same method when assigning the same value to three or more variables.

Be careful when assigning mutable objects such as list and dict .

If you use = consecutively, the same object is assigned to all variables. Therefore, if you change the value of an element or add a new element in one variable, the changes will be reflected in the others as well.

If you want to handle mutable objects separately, you need to assign them individually.
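(A sketch contrasting the two cases; the list contents are arbitrary:)

    c = d = [0, 1, 2]
    c[0] = 100
    print(d)  # [100, 1, 2] (c and d refer to the same object)

    c = [0, 1, 2]
    d = [0, 1, 2]
    c[0] = 100
    print(d)  # [0, 1, 2] (independent objects)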

"after c = []; d = [], c and d are guaranteed to refer to two different, unique, newly created empty lists. (Note that c = d = [] assigns the same object to both c and d.)" (3. Data model — Python 3.11.3 documentation)

You can also use copy() or deepcopy() from the copy module to make shallow and deep copies. See the following article.

  • Shallow and deep copy in Python: copy(), deepcopy()

Python Variables - Assign Multiple Values

Many Values to Multiple Variables

Python allows you to assign values to multiple variables in one line:
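(Sketch in the spirit of the missing example; the values are illustrative:)

    x, y, z = "Orange", "Banana", "Cherry"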

Note: Make sure the number of variables matches the number of values, or else you will get an error.

One Value to Multiple Variables

And you can assign the same value to multiple variables in one line:
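(Sketch of the missing example:)

    x = y = z = "Orange"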

Unpack a Collection

If you have a collection of values in a list, tuple, etc., Python allows you to extract the values into variables. This is called unpacking.

Unpack a list:
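(Reconstructed example; values are illustrative:)

    fruits = ["apple", "banana", "cherry"]
    x, y, z = fruits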

Learn more about unpacking in our Unpack Tuples Chapter.

Assigning multiple variables in one line in Python


A variable is a segment of memory with a unique name, used to hold data that will later be processed. Although each programming language has a different mechanism for declaring variables, the idea is always the same: a name is bound to the data assigned to it. Variables can store values of any data type.

The assignment operator (=) assigns the value provided on its right to the variable name given on its left. The basic syntax of variable declaration is:
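(A minimal sketch; the name and value are illustrative:)

    # variable_name = value
    num = 10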

 Assign Values to Multiple Variables in One Line

The mechanism above assigns a single variable, but it is possible in Python to assign multiple variables at the same time. Python assigns values from right to left. When assigning multiple variables in a single line, the variable names are written to the left of the assignment operator, separated by commas, and their respective values appear to its right, also separated by commas.

While declaring variables in this fashion, one must be careful with the order of the names and their corresponding values: the first variable name to the left of the assignment operator is assigned the first value to its right, and so on.
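(Illustrative sketch:)

    a, b, c = 10, 20, 30
    print(a, b, c)  # 10 20 30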

Variable assignment in a single line can also be done for different data types.

Not just simple variable assignment: assignment after performing some operation can be done in the same way, with different operation results assigned to multiple variables. The same syntax also lets you store the characters of a string in different variables.
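(A sketch of both ideas; names and values are illustrative:)

    # operation results assigned to multiple variables
    total, diff = 5 + 3, 5 - 3

    # characters of a string stored in different variables
    x, y, z = "abc"
    print(x, y, z)  # a b c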


Mastering Multiple Variable Assignment in Python

Python's ability to assign multiple variables in a single line is a feature that exemplifies the language's emphasis on readability and efficiency. In this detailed blog post, we'll explore the nuances of assigning multiple variables in Python, a technique that not only simplifies code but also enhances its readability and maintainability.

Introduction to Multiple Variable Assignment

Python allows the assignment of multiple variables simultaneously. This feature is not merely syntactic sugar but a powerful tool that can make your code more Pythonic.

What is Multiple Variable Assignment?

  • Simultaneous Assignment : Python enables the initialization of several variables in a single line, thereby reducing the number of lines of code and making it more readable.
  • Versatility : This feature can be used with various data types and is particularly useful for unpacking sequences.

Basic Multiple Variable Assignment

The simplest form of multiple variable assignment in Python involves assigning single values to multiple variables in one line.

Syntax and Examples

Parallel Assignment : Assign values to several variables in parallel.
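(Sketch of the missing example; values are illustrative:)

    a, b, c = 1, 2, 3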

  • Clarity and Brevity : This form of assignment is clear and concise.
  • Efficiency : Reduces the need for multiple lines when initializing several variables.

Unpacking Sequences into Variables

Python takes multiple variable assignment a step further with unpacking, allowing the assignment of sequences to individual variables.

Unpacking Lists and Tuples

Direct Unpacking : If you have a list or tuple, you can unpack its elements into individual variables.
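(Illustrative sketch:)

    point = (10, 20)
    x, y = point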

Unpacking Strings

Character Assignment : You can also unpack strings into variables with each character assigned to one variable.
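(Illustrative sketch:)

    a, b, c = "abc"
    print(b)  # 'b'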

Using Underscore for Unwanted Values

When unpacking, you may not always need all the values. Python allows the use of the underscore ( _ ) as a placeholder for unwanted values.

Ignoring Unnecessary Values

Discarding Values : Use _ for values you don't intend to use.
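(Illustrative sketch:)

    x, _, z = (1, 99, 3)  # the middle value is discarded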

Swapping Variables Efficiently

Multiple variable assignment can be used for an elegant and efficient way to swap the values of two variables.

Swapping Variables

No Temporary Variable Needed : Swap values without the need for an additional temporary variable.
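(Illustrative sketch:)

    a, b = 1, 2
    a, b = b, a
    print(a, b)  # 2 1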

Advanced Unpacking Techniques

Python provides even more advanced ways to handle multiple variable assignments, especially useful with longer sequences.

Extended Unpacking

Using Asterisk ( * ): Python 3 introduced a syntax for extended unpacking where you can use * to collect multiple values.
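(Illustrative sketch:)

    first, *rest = [1, 2, 3, 4]
    print(first)  # 1
    print(rest)   # [2, 3, 4]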

Best Practices and Common Pitfalls

While multiple variable assignment is a powerful feature, it should be used judiciously.

  • Readability : Ensure that your use of multiple variable assignments enhances, rather than detracts from, readability.
  • Matching Lengths : Be cautious of the sequence length. The number of elements must match the number of variables being assigned.

Multiple variable assignment in Python is a testament to the language’s design philosophy of simplicity and elegance. By understanding and effectively utilizing this feature, you can write more concise, readable, and Pythonic code. Whether unpacking sequences or swapping values, multiple variable assignment is a technique that can significantly improve the efficiency of your Python programming.

Multiple Assignment Syntax in Python


The multiple assignment syntax, often referred to as tuple unpacking or extended unpacking, is a powerful feature in Python. There are several ways to assign multiple values to variables at once.

Let's start with a first example that uses extended unpacking . This syntax is used to assign values from an iterable (in this case, a string) to multiple variables:
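(The snippet itself was lost in this copy; reconstructed from the explanation that follows:)

    a, *b, c = 'Devlabs'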

a : This variable will be assigned the first element of the iterable, which is 'D' in the case of the string 'Devlabs'.

*b : The asterisk (*) before b is used to collect the remaining elements of the iterable (the middle characters in the string 'Devlabs') into a list: ['e', 'v', 'l', 'a', 'b']

c : This variable will be assigned the last element of the iterable: 's'.

The multiple assignment syntax can also be used for numerous other tasks:

Swapping Values

This swaps the values of variables a and b without needing a temporary variable.
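(Reconstructed snippet:)

    a, b = b, a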

Splitting a List
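(Reconstructed from the explanation below:)

    first, *rest = [1, 2, 3, 4, 5]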

first will be 1, and rest will be a list containing [2, 3, 4, 5] .

Assigning Multiple Values from a Function
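(A sketch; get_values is the hypothetical function named in the text, and its return values are illustrative:)

    def get_values():
        return 1, 2, 3

    x, y, z = get_values()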

This assigns the values returned by get_values() to x, y, and z.

Ignoring Values
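(A sketch consistent with the explanation below; the ignored first value is arbitrary:)

    _, important_value = (0, "Hello")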

Here, you're ignoring the first value with an underscore _ and assigning "Hello" to the important_value . In Python, the underscore is commonly used as a convention to indicate that a variable is being intentionally ignored or is a placeholder for a value that you don't intend to use.

Unpacking Nested Structures
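(An illustrative sketch; the tuple contents are mine, not the article's:)

    point = ("origin", (0, 0))
    label, (x, y) = point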

This unpacks a nested structure (Tuple in this example) into separate variables. We can use similar syntax also for Dictionaries:
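(A sketch consistent with the explanation below; the dictionary contents are illustrative:)

    data = {'person': {'name': 'Alice', 'age': 30}}
    person = data['person']
    name, age = person['name'], person['age']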

In this case, we first extract the 'person' dictionary from data, and then we use multiple assignment to further extract values from the nested dictionaries, making the code more concise.

Extended Unpacking with Slicing
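(Reconstructed from the explanation below:)

    first, *middle, last = [1, 2, 3, 4, 5]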

first will be 1, middle will be a list containing [2, 3, 4], and last will be 5.

Split a String into a List
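(Reconstructed from the explanation below:)

    *split, = 'Devlabs'
    print(split)  # ['D', 'e', 'v', 'l', 'a', 'b', 's']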

*split, is used for iterable unpacking. The asterisk (*) collects the remaining elements into a list variable named split . In this case, it collects all the characters from the string.

The comma , after *split is used to indicate that it's a single-element tuple assignment. It's a syntax requirement to ensure that split becomes a list containing the characters.


What is Multiple Assignment in Python and How to use it?


When working with Python , you’ll often come across scenarios where you need to assign values to multiple variables simultaneously.

Python provides an elegant solution for this through its support for multiple assignments. This feature allows you to assign values to multiple variables in a single line, making your code cleaner, more concise, and easier to read.

In this blog, we’ll explore the concept of multiple assignments in Python and delve into its various use cases.

Understanding Multiple Assignment

Multiple assignment in Python is the process of assigning values to multiple variables in a single statement. Instead of writing individual assignment statements for each variable, you can group them together using a single line of code.
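(Reconstructed from the explanation below:)

    x, y, z = 10, 20, 30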

In this example, the variables x , y , and z are assigned the values 10, 20, and 30, respectively. The values are separated by commas, and they correspond to the variables in the same order.

Simultaneous Assignment

Multiple assignment takes advantage of simultaneous assignment. This means that the values on the right side of the assignment are evaluated before any variables are assigned. This avoids potential issues when variables depend on each other.
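(Reconstructed from the explanation below:)

    x, y = 5, 10
    x, y = y, x
    print(x, y)  # 10 5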

In this snippet, the values of x and y are swapped using multiple assignments. The right-hand side y, x evaluates to (10, 5) before assigning to x and y, respectively.

Unpacking Sequences

One of the most powerful applications of multiple assignments is unpacking sequences like lists, tuples, and strings. You can assign the individual elements of a sequence to multiple variables in a single line.
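(Reconstructed from the explanation below:)

    x, y = (3, 4)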

In this example, the tuple (3, 4) is unpacked into the variables x and y . The value 3 is assigned to x , and the value 4 is assigned to y .

Multiple Return Values

Functions in Python can return multiple values, which are often returned as tuples. With multiple assignments, you can easily capture these return values.
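(Reconstructed from the explanation below; get_coordinates is the function the text names:)

    def get_coordinates():
        return 5, 10

    x, y = get_coordinates()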

Here, the function get_coordinates() returns a tuple (5, 10), which is then unpacked into the variables x and y .

Swapping Values

We’ve already seen how multiple assignments can be used to swap the values of two variables. This is a concise way to achieve value swapping without using a temporary variable.

Iterating through Sequences

Multiple assignment is particularly useful when iterating through sequences. It allows you to iterate over pairs of elements in a sequence effortlessly.
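(A sketch; the points data is illustrative:)

    points = [(1, 2), (3, 4), (5, 6)]
    for x, y in points:
        print(x, y)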

In this loop, each tuple (x, y) in the points list is unpacked and the values are assigned to the variables x and y for each iteration.

Discarding Values

Sometimes you might not be interested in all the values from an iterable. Python allows you to use an underscore (_) to discard unwanted values.
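(Reconstructed from the explanation below:)

    x, _ = (10, 20)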

In this example, only the value 10 from the tuple is assigned to x , while the value 20 is discarded.

Multiple assignment is a powerful feature in Python that makes code more concise and readable. It allows you to assign values to multiple variables in a single line, swap values without a temporary variable, unpack sequences effortlessly, and work with functions that return multiple values. By mastering multiple assignment, you'll enhance your ability to write clean, efficient, and elegant Python code.


Unpacking and Multiple Assignment in Python

About unpacking and multiple assignment

Unpacking refers to the act of extracting the elements of a collection, such as a list , tuple , or dict , using iteration. Unpacked values can then be assigned to variables within the same statement. A very common example of this behavior is for item in list , where item takes on the value of each list element in turn throughout the iteration.

Multiple assignment is the ability to assign multiple variables to unpacked values within one statement. This allows for code to be more concise and readable, and is done by separating the variables to be assigned with a comma such as first, second, third = (1,2,3) or for index, item in enumerate(iterable) .

The special operators * and ** are often used in unpacking contexts. * can be used to combine multiple lists / tuples into one list / tuple by unpacking each into a new common list / tuple . ** can be used to combine multiple dictionaries into one dictionary by unpacking each into a new common dict .

When the * operator is used without a collection, it packs a number of values into a list . This is often used in multiple assignment to group all "leftover" elements that do not have individual assignments into a single variable.

It is common in Python to also exploit this unpacking/packing behavior when using or defining functions that take an arbitrary number of positional or keyword arguments. You will often see these "special" parameters defined as def some_function(*args, **kwargs) and the "special" arguments used as some_function(*some_tuple, **some_dict) .

*<variable_name> and **<variable_name> should not be confused with * and ** . While * and ** are used for multiplication and exponentiation respectively, *<variable_name> and **<variable_name> are used as packing and unpacking operators.

Multiple assignment

In multiple assignment, the number of variables on the left side of the assignment operator ( = ) must match the number of values on the right side. To separate the values, use a comma , :

If the multiple assignment gets an incorrect number of variables for the values given, a ValueError will be thrown:
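(A sketch covering both cases; values are illustrative:)

    x, y = 1, 2     # the variable count matches the value count

    x, y, z = 1, 2
    # ValueError: not enough values to unpack (expected 3, got 2)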

Multiple assignment is not limited to one data type:
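(Illustrative sketch:)

    x, y, z = 1, "fish", [2, 3]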

Multiple assignment can be used to swap elements in lists . This practice is pretty common in sorting algorithms . For example:
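(Illustrative sketch:)

    numbers = [3, 1]
    numbers[0], numbers[1] = numbers[1], numbers[0]
    print(numbers)  # [1, 3]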

Since tuples are immutable, you can't swap elements in a tuple .

The examples below use lists but the same concepts apply to tuples .

In Python, it is possible to unpack the elements of list / tuple / dictionary into distinct variables. Since values appear within lists / tuples in a specific order, they are unpacked into variables in the same order:

If there are values that are not needed then you can use _ to flag them:
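(A sketch covering both points; values are illustrative:)

    fruits = ["apple", "banana", "cherry"]
    x, y, z = fruits       # unpacked in the same order
    _, second, _ = fruits  # unneeded values flagged with _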

Deep unpacking

Unpacking and assigning values from a list / tuple inside of a list or tuple ( also known as nested lists/tuples ), works in the same way a shallow unpacking does, but often needs qualifiers to clarify the values context or position:

You can also deeply unpack just a portion of a nested list / tuple :
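(Illustrative sketches of deep unpacking and of deeply unpacking only a portion:)

    name, (x, y) = ("point", (1, 2))
    a, (b, c), d = (1, (2, 3), 4)   # only the middle element is unpacked deeply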

If the unpacking has variables with incorrect placement and/or an incorrect number of values, you will get a ValueError :
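(Illustrative sketch:)

    a, (b, c) = (1, (2, 3, 4))
    # ValueError: too many values to unpack (expected 2)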

Unpacking a list/tuple with *

When unpacking a list / tuple you can use the * operator to capture the "leftover" values. This is clearer than slicing the list / tuple ( which in some situations is less readable ). For example, we can extract the first element and then assign the remaining values into a new list without the first element:

We can also extract the values at the beginning and end of the list while grouping all the values in the middle:

We can also use * in deep unpacking:
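(A sketch covering the three uses of * described above; values are illustrative:)

    numbers = [1, 2, 3, 4, 5]
    first, *rest = numbers          # rest == [2, 3, 4, 5]
    first, *middle, last = numbers  # middle == [2, 3, 4]
    a, (b, *c) = (1, (2, 3, 4))     # deep unpacking: c == [3, 4]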

Unpacking a dictionary

Unpacking a dictionary is a bit different than unpacking a list / tuple . Iteration over dictionaries defaults to the keys . So when unpacking a dict , you can only unpack the keys and not the values :

If you want to unpack the values then you can use the values() method:

If both keys and values are needed, use the items() method. Using items() will generate tuples with key-value pairs, because dict.items() returns an iterable of key-value tuples.
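(A sketch covering keys, values, and items; the dictionary contents are illustrative:)

    fruits = {"apple": 1, "banana": 2}
    a, b = fruits                        # keys: a == "apple"
    x, y = fruits.values()               # values: x == 1
    (k1, v1), (k2, v2) = fruits.items()  # key-value tuples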

Packing

Packing is the ability to group multiple values into one list that is assigned to a variable. This is useful when you want to unpack values, make changes, and then pack the results back into a variable. It also makes it possible to perform merges on 2 or more lists / tuples / dicts.

Packing a list/tuple with *

Packing a list / tuple can be done using the * operator. This will pack all the values into a list / tuple .
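(Illustrative sketch:)

    first = [1, 2]
    second = [3, 4]
    combined = [*first, *second]  # [1, 2, 3, 4]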

Packing a dictionary with **

Packing a dictionary is done by using the ** operator. This will pack all key - value pairs from one dictionary into another dictionary, or combine two dictionaries together.
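(Illustrative sketch; later keys win on collision:)

    defaults = {"color": "red"}
    extras = {"size": "M"}
    combined = {**defaults, **extras}  # {'color': 'red', 'size': 'M'}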

Usage of * and ** with functions

Packing with function parameters.

When you create a function that accepts an arbitrary number of arguments, you can use *args or **kwargs in the function definition. *args is used to pack an arbitrary number of positional (non-keyworded) arguments and **kwargs is used to pack an arbitrary number of keyword arguments.

Usage of *args :

Usage of **kwargs :

*args and **kwargs can also be used in combination with one another:
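(A sketch of the three usages described above; names and values are illustrative:)

    def print_args(*args):
        print(args)         # positional arguments packed into a tuple

    def print_kwargs(**kwargs):
        print(kwargs)       # keyword arguments packed into a dict

    def show(*args, **kwargs):
        print(args, kwargs)

    show(1, 2, key="value")  # (1, 2) {'key': 'value'}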

You can also write parameters before *args to allow for specific positional arguments. Individual keyword arguments then have to appear before **kwargs .

Arguments have to be structured like this:

def my_function(<positional_args>, *args, <key-word_args>, **kwargs)

If you don't follow this order then you will get an error.

Writing arguments in an incorrect order will result in an error:
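(A sketch; the function names are hypothetical:)

    def my_function(a, *args, b=0, **kwargs):  # follows the required order
        pass

    # def bad_function(**kwargs, *args): pass  # SyntaxError: invalid syntax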

Unpacking into function calls

You can use * to unpack a list / tuple of arguments into a function call. This is very useful for functions that don't accept an iterable :

Using * unpacking with the zip() function is another common use case. Since zip() takes multiple iterables and returns a list of tuples with the values from each iterable grouped:
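(A sketch of both call-site uses; names and values are illustrative:)

    def add(a, b, c):
        return a + b + c

    print(add(*(1, 2, 3)))          # 6

    pairs = [(1, "a"), (2, "b")]
    numbers, letters = zip(*pairs)  # (1, 2) and ('a', 'b')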

Learn Unpacking And Multiple Assignment

Unlock 3 more exercises to practice unpacking and multiple assignment.

Multiple assignment and tuple unpacking improve Python code readability

Trey Hunner, Mar 7th, 2018

Whether I’m teaching new Pythonistas or long-time Python programmers, I frequently find that Python programmers underutilize multiple assignment .

Multiple assignment (also known as tuple unpacking or iterable unpacking) allows you to assign multiple variables at the same time in one line of code. This feature often seems simple after you’ve learned about it, but it can be tricky to recall multiple assignment when you need it most .

In this article we’ll see what multiple assignment is, we’ll take a look at common uses of multiple assignment, and then we’ll look at a few uses for multiple assignment that are often overlooked.

Note that in this article I will be using f-strings which are a Python 3.6+ feature. If you’re on an older version of Python, you’ll need to mentally translate those to use the string format method.

How multiple assignment works

I’ll be using the words multiple assignment , tuple unpacking , and iterable unpacking interchangeably in this article. They’re all just different words for the same thing.

Python’s multiple assignment looks like this:

Here we’re setting x to 10 and y to 20 .

What’s happening at a lower level is that we’re creating a tuple of 10, 20 and then looping over that tuple and taking each of the two items we get from looping and assigning them to x and y in order.

This syntax might make that a bit more clear:

Parentheses are optional around tuples in Python, and they're also optional in multiple assignment (which uses a tuple-like syntax). All of these are equivalent:
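(Reconstructed equivalents:)

    x, y = 10, 20
    x, y = (10, 20)
    (x, y) = 10, 20
    (x, y) = (10, 20)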

Multiple assignment is often called “tuple unpacking” because it’s frequently used with tuples. But we can use multiple assignment with any iterable, not just tuples. Here we’re using it with a list:

And with a string:
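(Illustrative sketches for a list and a string:)

    x, y = [10, 20]        # a list
    first, second = "hi"   # a string: first == 'h', second == 'i'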

Anything that can be looped over can be “unpacked” with tuple unpacking / multiple assignment.

Here’s another example to demonstrate that multiple assignment works with any number of items and that it works with variables as well as objects we’ve just created:

Note that on that last line we’re actually swapping variable names, which is something multiple assignment allows us to do easily.

Alright, let’s talk about how multiple assignment can be used.

Unpacking in a for loop

You’ll commonly see multiple assignment used in for loops.

Let’s take a dictionary:

Instead of looping over our dictionary like this:

You’ll often see Python programmers use multiple assignment by writing this:

When you write the for X in Y line of a for loop, you’re telling Python that it should do an assignment to X for each iteration of your loop. Just like in an assignment using the = operator, we can use multiple assignment here.

Is essentially the same as this:
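(Reconstructed equivalent, using the same hypothetical person dictionary:)

    for item in person.items():
        key, value = item
        print(f"{key}: {value}")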

We’re just not doing an unnecessary extra assignment in the first example.

So multiple assignment is great for unpacking dictionary items into key-value pairs, but it’s helpful in many other places too.

It’s great when paired with the built-in enumerate function:

And the zip function:
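(Illustrative sketches for enumerate and zip; the data is hypothetical:)

    for i, name in enumerate(["Trey", "Guido"]):
        print(i, name)

    for name, color in zip(["Trey", "Guido"], ["purple", "blue"]):
        print(name, color)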

If you’re unfamiliar with enumerate or zip , see my article on looping with indexes in Python .

Newer Pythonistas often see multiple assignment in the context of for loops and sometimes assume it’s tied to loops. Multiple assignment works for any assignment though, not just loop assignments.

An alternative to hard coded indexes

It’s not uncommon to see hard coded indexes (e.g. point[0] , items[1] , vals[-1] ) in code:

When you see Python code that uses hard coded indexes there’s often a way to use multiple assignment to make your code more readable .

Here’s some code that has three hard coded indexes:

We can make this code much more readable by using multiple assignment to assign separate month, day, and year variables:
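(A sketch of the before and after; the (year, month, day) tuple is an assumption:)

    today = (2018, 3, 7)

    # hard coded indexes:
    print(f"{today[1]}/{today[2]}/{today[0]}")

    # multiple assignment:
    year, month, day = today
    print(f"{month}/{day}/{year}")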

Whenever you see hard coded indexes in your code, stop to consider whether you could use multiple assignment to make your code more readable.

Multiple assignment is very strict

Multiple assignment is actually fairly strict when it comes to unpacking the iterable we give to it.

If we try to unpack a larger iterable into a smaller number of variables, we’ll get an error:

If we try to unpack a smaller iterable into a larger number of variables, we’ll also get an error:
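(Illustrative sketches of both failure modes:)

    x, y = (10, 20, 30)
    # ValueError: too many values to unpack (expected 2)

    x, y, z = (10, 20)
    # ValueError: not enough values to unpack (expected 3, got 2)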

This strictness is pretty great. If we’re working with an item that has a different size than we expected, the multiple assignment will fail loudly and we’ll hopefully now know about a bug in our program that we weren’t yet aware of.

Let’s look at an example. Imagine that we have a short command line program that parses command-line arguments in a rudimentary way, like this:

Our program is supposed to accept 2 arguments, like this:

But if someone called our program with three arguments, they will not see an error:
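(Hypothetical invocations of the sketch above:)

    $ python rename.py old.txt new.txt
    Renaming old.txt to new.txt

    $ python rename.py old.txt new.txt extra.txt
    Renaming old.txt to new.txt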

There’s no error because we’re not validating that we’ve received exactly 2 arguments.

If we use multiple assignment instead of hard coded indexes, the assignment will verify that we receive exactly the expected number of arguments:
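(The strict version of the sketch; a wrong argument count now raises ValueError:)

    import sys

    _, old_name, new_name = sys.argv
    print(f"Renaming {old_name} to {new_name}")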

Note : we’re using the variable name _ to note that we don’t care about sys.argv[0] (the name of our program). Using _ for variables you don’t care about is just a convention.

An alternative to slicing

So multiple assignment can be used for avoiding hard coded indexes and it can be used to ensure we’re strict about the size of the tuples/iterables we’re working with.

Multiple assignment can be used to replace hard coded slices too!

Slicing is a handy way to grab a specific portion of the items in lists and other sequences.

Here are some slices that are “hard coded” in that they only use numeric indexes:

Whenever you see slices that don’t use any variables in their slice indexes, you can often use multiple assignment instead. To do this we have to talk about a feature that I haven’t mentioned yet: the * operator.

In Python 3.0, the * operator was added to the multiple assignment syntax, allowing us to capture remaining items after an unpacking into a list:

The * operator allows us to replace hard coded slices near the ends of sequences.

These two lines are equivalent:

These two lines are equivalent also:
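(Sketches of both equivalence pairs; the list is illustrative:)

    items = [1, 2, 3, 4]

    first, *rest = items
    first, rest = items[0], items[1:]          # equivalent

    *beginning, last = items
    beginning, last = items[:-1], items[-1]    # equivalent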

With the * operator and multiple assignment you can replace things like this:

With more descriptive code, like this:
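(Illustrative before/after sketch:)

    numbers = [1, 2, 3, 4, 5]

    # hard coded slices:
    head, middle, tail = numbers[0], numbers[1:-1], numbers[-1]

    # multiple assignment:
    head, *middle, tail = numbers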

So if you see hard coded slice indexes in your code, consider whether you could use multiple assignment to clarify what those slices really represent.

Deep unpacking

This next feature is something that long-time Python programmers often overlook. It doesn’t come up quite as often as the other uses for multiple assignment that I’ve discussed, but it can be very handy to know about when you do need it.

We’ve seen multiple assignment for unpacking tuples and other iterables. We haven’t yet seen that this is can be done deeply .

I’d say that the following multiple assignment is shallow because it unpacks one level deep:

And I’d say that this multiple assignment is deep because it unpacks the previous point tuple further into x , y , and z variables:

If it seems confusing what's going on above, using parentheses consistently on both sides of the assignment may help clarify things:
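(The same assignment with consistent parentheses:)

    (color, (x, y, z)) = ("red", (1, 2, 3))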

We’re unpacking one level deep to get two objects, but then we take the second object and unpack it also to get 3 more objects. Then we assign our first object and our thrice-unpacked second object to our new variables ( color , x , y , and z ).

Take these two lists:

Here’s an example of code that works with these lists by using shallow unpacking:

And here’s the same thing with deeper unpacking:

Note that in this second case, it’s much more clear what type of objects we’re working with. The deep unpacking makes it apparent that we’re receiving two 2-itemed tuples each time we loop.

Deep unpacking often comes up when nesting looping utilities that each provide multiple items. For example, you may see deep multiple assignments when using enumerate and zip together:
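(An illustrative sketch; the data is hypothetical:)

    first_names = ["Trey", "David"]
    last_names = ["Hunner", "Lord"]
    for i, (first, last) in enumerate(zip(first_names, last_names)):
        print(f"{i}: {first} {last}")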

I said before that multiple assignment is strict about the size of our iterables as we unpack them. With deep unpacking we can also be strict about the shape of our iterables .

This works:

But this buggy code works too:

Whereas this works:

But this does not:
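(Sketches of all four cases just described; the data shapes are illustrative:)

    # This works:
    coordinate, color = ((1, 2), "red")

    # But this buggy code works too (the shape isn't checked):
    coordinate, color = ((1, 2, 3), "red")

    # Whereas this works:
    (x, y), color = ((1, 2), "red")

    # But this does not:
    (x, y), color = ((1, 2, 3), "red")
    # ValueError: too many values to unpack (expected 2)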

With multiple assignment we’re assigning variables while also making particular assertions about the size and shape of our iterables. Multiple assignment will help you clarify your code to both humans (for better code readability ) and to computers (for improved code correctness ).

Using a list-like syntax

I noted before that multiple assignment uses a tuple-like syntax, but it works on any iterable. That tuple-like syntax is the reason it’s commonly called “tuple unpacking” even though it might be more clear to say “iterable unpacking”.

I didn’t mention before that multiple assignment also works with a list-like syntax .

Here’s a multiple assignment with a list-like syntax:

This might seem really strange. What’s the point of allowing both list-like and tuple-like syntaxes?

I use this feature rarely, but I find it helpful for code clarity in specific circumstances.

Let’s say I have code that used to look like this:

And our well-intentioned coworker has decided to use deep multiple assignment to refactor our code to this:
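(The refactored version, reusing counts from the sketch above:)

    (value, times_seen), = counts.most_common(1)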

See that trailing comma on the left-hand side of the assignment? It’s easy to miss and it makes this code look sort of weird. What is that comma even doing in this code?

That trailing comma is there to make a single item tuple. We’re doing deep unpacking here.

Here’s another way we could write the same code:

This might make that deep unpacking a little more obvious but I’d prefer to see this instead:
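(The list-syntax variant:)

    [(value, times_seen)] = counts.most_common(1)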

The list-syntax in our assignment makes it more clear that we’re unpacking a one-item iterable and then unpacking that single item into value and times_seen variables.

When I see this, I also think I bet we’re unpacking a single-item list . And that is in fact what we’re doing. We’re using a Counter object from the collections module here. The most_common method on Counter objects allows us to limit the length of the list returned to us. We’re limiting the list we’re getting back to just a single item.

When you’re unpacking structures that often hold lots of values (like lists) and structures that often hold a very specific number of values (like tuples) you may decide that your code appears more semantically accurate if you use a list-like syntax when unpacking those list-like structures.

If you’d like you might even decide to adopt a convention of always using a list-like syntax when unpacking list-like structures (frequently the case when using * in multiple assignment):

I don’t usually use this convention myself, mostly because I’m just not in the habit of using it. But if you find it helpful, you might consider using this convention in your own code.

When using multiple assignment in your code, consider when and where a list-like syntax might make your code more descriptive and more clear. This can sometimes improve readability.

Don’t forget about multiple assignment

Multiple assignment can improve both the readability of your code and the correctness of your code. It can make your code more descriptive while also making implicit assertions about the size and shape of the iterables you’re unpacking.

The use for multiple assignment that I often see forgotten is its ability to replace hard coded indexes , including replacing hard coded slices (using the * syntax). It’s also common to overlook the fact that multiple assignment works deeply and can be used with both a tuple-like syntax and a list-like syntax.

It’s tricky to recognize and remember all the cases that multiple assignment can come in handy. Please feel free to use this article as your personal reference guide to multiple assignment.


How to assign values to variables in Python

Assigning values in Python to variables is a fundamental and straightforward process. This action forms the basis for storing and manipulating Python data. The various methods for assigning values to variables in Python are given below.

Assign Values To Variables Using Direct Initialisation Method

Assigning values to variables in Python using the Direct Initialization Method involves a simple, single-line statement. This method is essential for efficiently setting up variables with initial values.

To use this method, you directly assign the desired value to a variable. The format follows variable_name = value . For instance, age = 21 assigns the integer 21 to the variable age.

This method is not just limited to integers. For example, assigning a string to a variable would be name = "Alice" . An example of this implementation in Python is.
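(Sketch assembling the examples named above:)

    age = 21
    name = "Alice"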

Python Variables – Assign Multiple Values

In Python, assigning multiple values to variables can be done in a single, efficient line of code. This feature streamlines the initializing of several variables at once, making the code more concise and readable.

Python allows you to assign values to multiple variables simultaneously by separating each variable and value with commas. For example, x, y, z = 1, 2, 3 simultaneously assigns 1 to x, 2 to y, and 3 to z.

Additionally, Python supports unpacking a collection of values into variables. For example, if you have a list values = [1, 2, 3], you can assign these values to a, b, c by writing a, b, c = values.

Python’s capability to assign multiple values to multiple variables in a single line enhances code efficiency and clarity. This feature is applicable across various data types and includes unpacking collections into multiple variables, as shown in the examples.

Assign Values To Variables Using Conditional Operator

Assigning values to variables in Python using a conditional operator allows for more dynamic and flexible value assignments based on certain conditions. This method employs the ternary operator, a concise way to assign values based on a condition's truth value.

The syntax for using the conditional operator in Python follows the pattern variable = value_if_true if condition else value_if_false. For example, status = 'Adult' if age >= 18 else 'Minor' assigns 'Adult' to status if age is 18 or more, and 'Minor' otherwise.

This method can also be used with more complex conditions and various data types. For example, you can assign different strings to a variable based on a numerical comparison.
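(A sketch; the variable names and threshold are illustrative:)

    marks = 75
    result = 'Pass' if marks >= 40 else 'Fail'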

Using the conditional operator for variable assignment in Python enables more nuanced and condition-dependent variable initialization. It is particularly useful for creating readable one-liners that eliminate the need for longer if-else statements, as illustrated in the examples.

Python One Liner Conditional Statement Assigning

Python allows for one-liner conditional statements to assign values to variables, providing a compact and efficient way of handling conditional assignments. This approach utilizes the ternary operator for conditional expressions in a single line of code.

The ternary operator syntax in Python is variable = value_if_true if condition else value_if_false. For instance, message = 'High' if temperature > 20 else 'Low' assigns 'High' to message if temperature is greater than 20, and 'Low' otherwise.

This method can also be applied to more complex conditions.
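(An illustrative sketch of a nested condition:)

    x = 15
    size = 'small' if x < 10 else ('medium' if x < 20 else 'large')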

Using one-liner conditional statements in Python for variable assignment streamlines the process, especially when dealing with simple conditions. It replaces the need for multi-line if-else statements, making the code more concise and readable.

Multiple Variable Assignment in JavaScript


Use the = Operator to Assign Multiple Variables in JavaScript

Multiple Variable Assignment Using Destructuring Assignment With the fill() Function in JavaScript

This tutorial explains multiple variable assignments in JavaScript because variables are the most important part of our coding.

Sometimes, we have to do many variable declarations and assignments as they have the same value. How? Let’s understand.

Assume we have variable1 , variable2 , and variable3 and want all three variables to have a value of 1 .
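(A sketch of the two forms being compared; the original snippets were lost in this copy:)

    // chained, single statement
    var variable1 = variable2 = variable3 = 1;

    // separate declarations
    var variable1 = 1, variable2 = 1, variable3 = 1;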

They seem equivalent, but they are not. The reason is variables’ scope and assignment precedence .

The assignment operator is right-associative in JavaScript, which means it parses the left-most after parsing the right-most.

Let’s have another example to understand variable scope and assignment precedence .

Focus on the code and see that variable1 , variable2 , and variable3 are in function scope and local to the test1() .

They are not available outside of the test1() method, which is why accessing them there returns undefined. Here, var variable1 = 1, variable2 = 1, variable3 = 1; is equivalent to var variable1 = 1; var variable2 = 1; var variable3 = 1;.

Now, observe the test2() function. The variable1 is in function scope due to the var keyword, but variable2 and variable3 are leaking because they are not written with the var keyword.

They are globally accessible outside the test2() function. Remember that the variable declarations are hoisted only.

However, the assignment is right-associative, which means var variable1 = (window.variable2 = (window.variable3 = 1));.

This means variable3 will be assigned 1 first; then the value of variable3 will be allocated to variable2, and lastly, the value of variable2 will be assigned to variable1.

To avoid a variable leak in test2() , we can split the variable declaration and assignment into two separate lines. In this way, we can restrict variable1 , variable2 , and variable3 to test2() function scope.
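(A sketch of the split declaration and assignment:)

    function test2() {
      var variable1, variable2, variable3;
      variable1 = variable2 = variable3 = 1; // all three stay local now
    }
    test2();
    console.log(typeof variable2); // "undefined": nothing leaks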

The destructuring assignment helps in assigning multiple variables with the same value without leaking them outside the function.

The fill() method updates all array elements with a static value and returns the modified array. You can read more about fill() here .
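(A sketch combining destructuring with fill(), as described above:)

    const [a, b, c] = new Array(3).fill(1);
    console.log(a, b, c); // 1 1 1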


Assignment (=)

The assignment ( = ) operator is used to assign a value to a variable or property. The assignment expression itself has a value, which is the assigned value. This allows multiple assignments to be chained in order to assign a single value to multiple variables.

Syntax

x = y

x: A valid assignment target, including an identifier or a property accessor. It can also be a destructuring assignment pattern.

y: An expression specifying the value to be assigned to x.

Return value

The value of y.

Exceptions

ReferenceError: Thrown in strict mode if assigning to an identifier that is not declared in the scope.

TypeError: Thrown in strict mode if assigning to a property that is not modifiable.

Description

The assignment operator is completely different from the equals ( = ) sign used as syntactic separators in other locations, which include:

  • Initializers of var , let , and const declarations
  • Default values of destructuring
  • Default parameters
  • Initializers of class fields

All these places accept an assignment expression on the right-hand side of the = , so if you have multiple equals signs chained together:

This is equivalent to:
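(Reconstructed from the explanation below:)

    const x = y = 5;
    // which is parsed as:
    // const x = (y = 5);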

Which means y must be a pre-existing variable, and x is a newly declared const variable. y is assigned the value 5 , and x is initialized with the value of the y = 5 expression, which is also 5 . If y is not a pre-existing variable, a global variable y is implicitly created in non-strict mode , or a ReferenceError is thrown in strict mode. To declare two variables within the same declaration, use:
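(Reconstructed snippet:)

    const x = 5, y = 5;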

Simple assignment and chaining

Value of assignment expressions.

The assignment expression itself evaluates to the value of the right-hand side, so you can log the value and assign to a variable at the same time.
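(An illustrative sketch:)

    let x;
    console.log(x = 2); // logs 2: the expression's value is the assigned value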

Unqualified identifier assignment

The global object sits at the top of the scope chain. When attempting to resolve a name to a value, the scope chain is searched. This means that properties on the global object are conveniently visible from every scope, without having to qualify the names with globalThis. or window. or global. .

Because the global object has a String property ( Object.hasOwn(globalThis, "String") ), you can use the following code:
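(A minimal sketch of the idea; not necessarily the page's original example:)

    console.log(String === globalThis.String); // true
    console.log(String("hello"));              // "hello"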

So the global object will ultimately be searched for unqualified identifiers. You don't have to type globalThis.String ; you can just type the unqualified String . To make this feature more conceptually consistent, assignment to unqualified identifiers will assume you want to create a property with that name on the global object (with globalThis. omitted), if there is no variable of the same name declared in the scope chain.

In strict mode , assignment to an unqualified identifier in strict mode will result in a ReferenceError , to avoid the accidental creation of properties on the global object.

Note that the implication of the above is that, contrary to popular misinformation, JavaScript does not have implicit or undeclared variables. It just conflates the global object with the global scope and allows omitting the global object qualifier during property creation.

Assignment with destructuring

The left-hand side of = can also be an assignment pattern. This allows assigning to multiple variables at once.
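(An illustrative sketch:)

    let a, b;
    [a, b] = [1, 2];
    console.log(a, b); // 1 2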

For more information, see Destructuring assignment .


about_Assignment_Operators


Short description

Describes how to use operators to assign values to variables.

Long description

Assignment operators assign one or more values to a variable. The equals sign (=) is the PowerShell assignment operator. PowerShell also has the following compound assignment operators: +=, -=, *=, /=, %=, ++, --, ??=. Compound assignment operators perform operations on the values before the assignment.

The syntax of the assignment operators is as follows:

  • <assignable-expression> <assignment-operator> <value>

Assignable expressions include variables and properties. The value can be a single value, an array of values, or a command, expression, or statement.

The increment and decrement operators are unary operators. Each has prefix and postfix versions.

  • <assignable-expression><operator>
  • <operator><assignable-expression>

The value of the assignable expression must be a number or it must be convertible to a number.

Using the assignment operator

Variables are named memory spaces that store values. You store the values in variables using the assignment operator = . The new value can replace the existing value of the variable, or you can append a new value to the existing value. For example, the following statement assigns the value PowerShell to the $MyShell variable:
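(Reconstructed statement:)

    $MyShell = "PowerShell"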

When you assign a value to a variable in PowerShell, the variable is created if it didn't already exist. For example, the first of the following two assignment statements creates the $a variable and assigns a value of 6 to $a . The second assignment statement assigns a value of 12 to $a . The first statement creates a new variable. The second statement changes only its value:
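(Reconstructed statements:)

    $a = 6
    $a = 12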

Variables in PowerShell don't have a specific data type unless you cast them. When a variable contains only one object, the variable takes the data type of that object. When a variable contains a collection of objects, the variable has the System.Object data type. Therefore, you can assign any type of object to the collection. The following example shows that you can add process objects, service objects, strings, and integers to a variable without generating an error:
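(A sketch of such a mixed collection; not necessarily the doc's original listing:)

    $a = Get-Process
    $a += Get-Service
    $a += "a string"
    $a += 12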

Because the assignment operator = has a lower precedence than the pipeline operator | , parentheses aren't required to assign the result of a command pipeline to a variable. For example, the following command sorts the services on the computer and then assigns the sorted services to the $a variable:
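(A sketch consistent with the description:)

    $a = Get-Service | Sort-Object -Property Name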

You can also assign the value created by a statement to a variable, as in the following example:

This example assigns zero to the $a variable if the value of $b is less than zero. It assigns the value of $b to $a if the value of $b isn't less than zero.
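(A sketch of the statement described; the value of $b is illustrative:)

    $b = -3
    $a = if ($b -lt 0) { 0 } else { $b }
    $a   # 0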

To assign an array (multiple values) to a variable, separate the values with commas, as follows:
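(Reconstructed example:)

    $a = 1, 2, 3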

To assign a hash table to a variable, use the standard hash table notation in PowerShell. Type an at sign @ followed by key/value pairs that are separated by semicolons ; and enclosed in braces { } . For example, to assign a hashtable to the $a variable, type:
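(A sketch; the keys and values are illustrative:)

    $a = @{ one = 1; two = 2; three = 3 }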

To assign hexadecimal values to a variable, precede the value with 0x . PowerShell converts the hexadecimal value (0x10) to a decimal value (in this case, 16) and assigns that value to the $a variable. For example, to assign a value of 0x10 to the $a variable, type:
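(Reconstructed example:)

    $a = 0x10   # $a is 16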

To assign an exponential value to a variable, type the root number, the letter e , and a number that represents a multiple of 10. For example, to assign a value of 3.1415 to the power of 1,000 to the $a variable, type:
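(Reconstructed example:)

    $a = 3.1415e3   # $a is 3141.5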

PowerShell can also convert kilobytes KB , megabytes MB , and gigabytes GB into bytes. For example, to assign a value of 10 kilobytes to the $a variable, type:
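(Reconstructed example:)

    $a = 10kb   # $a is 10240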

Using compound assignment operators

Compound assignment operators perform numeric operations on the values before the assignment.

Compound assignment operators don't use dynamic scoping. The variable is always in the current scope.

In the following example, the variable $x is defined in the global scope. The braces create a new scope. The variable $x inside the braces is a new instance and not a copy of the global variable.

When you use the regular assignment operator, you get a copy of the variable from the parent scope. But notice that $x in the parent scope is not changed.
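(A sketch of the scoping behavior described above; outputs follow from that description:)

    $x = 1                  # global scope
    & { $x += 1; $x }       # outputs 1: $x inside the braces is a new instance
    $x                      # 1

    & { $x = $x + 1; $x }   # outputs 2: the local $x starts from a copy of the parent's value
    $x                      # still 1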

The assignment by addition operator

The assignment by addition operator += either increments the value of a variable or appends the specified value to the existing value. The action depends on whether the variable has a numeric or string type and whether the variable contains a single value (a scalar) or multiple values (a collection).

The += operator combines two operations. First, it adds, and then it assigns. Therefore, the following statements are equivalent:

When the variable contains a single numeric value, the += operator increments the existing value by the amount on the right side of the operator. Then, the operator assigns the resulting value to the variable. The following example shows how to use the += operator to increase the value of a variable:
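(Illustrative sketch:)

    $a = 4
    $a += 2   # equivalent to $a = $a + 2
    $a        # 6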

When the value of the variable is a string, the value on the right side of the operator is appended to the string, as follows:
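(Illustrative sketch:)

    $a = "Windows"
    $a += " PowerShell"
    $a   # Windows PowerShell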

When the value of the variable is an array, the += operator appends the values on the right side of the operator to the array. Unless the array is explicitly typed by casting, you can append any type of value to the array, as follows:
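(Illustrative sketch:)

    $a = 1, 2, 3
    $a += 2
    $a += "String"
    $a   # 1 2 3 2 String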

When the value of a variable is a hash table, the += operator appends the value on the right side of the operator to the hash table. However, because the only type that you can add to a hash table is another hash table, all other assignments fail.

For example, the following command assigns a hash table to the $a variable. Then, it uses the += operator to append another hash table to the existing hash table, effectively adding a new key-value pair to the existing hash table. This command succeeds, as shown in the output:

The following command attempts to append an integer "1" to the hash table in the $a variable. This command fails:
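(A sketch of both commands; keys and values are illustrative:)

    $a = @{ a = 1; b = 2 }
    $a += @{ mode = "write" }   # succeeds: the pair is appended
    $a += 1                     # fails: only a hash table can be added to a hash table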

The assignment by subtraction operator

The assignment by subtraction operator -= decrements the value of a variable by the value that's specified on the right side of the operator. This operator can't be used with string variables, and it can't be used to remove an element from a collection.

The -= operator combines two operations. First, it subtracts, and then it assigns. Therefore, the following statements are equivalent:

The following example shows how to use of the -= operator to decrease the value of a variable:

You can also use the -= assignment operator to decrease the value of a member of a numeric array. To do this, specify the index of the array element that you want to change. In the following example, the value of the third element of an array (element 2) is decreased by 1:
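(Sketches of both uses; values are illustrative:)

    $a = 8
    $a -= 2   # equivalent to $a = $a - 2
    $a        # 6

    $b = 1, 2, 3
    $b[2] -= 1
    $b        # 1 2 2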

You can't use the -= operator to delete the values of a variable. To delete all the values that are assigned to a variable, use the Clear-Item or Clear-Variable cmdlets to assign a value of $null or "" to the variable.

To delete a particular value from an array, use array notation to assign a value of $null to the particular item. For example, the following statement deletes the second value (index position 1) from an array:
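(Illustrative sketch:)

    $a = 1, 2, 3
    $a[1] = $null
    $a   # 1 3 (the second element is now $null)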

To delete a variable, use the Remove-Variable cmdlet. This method is useful when the variable is explicitly cast to a particular data type, and you want an untyped variable. The following command deletes the $a variable:
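(Reconstructed command:)

    Remove-Variable -Name a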

The assignment by multiplication operator

The assignment by multiplication operator *= multiplies a numeric value or appends the specified number of copies of the string value of a variable.

When a variable contains a single numeric value, that value is multiplied by the value on the right side of the operator. For example, the following example shows how to use the *= operator to multiply the value of a variable:

In this case, the *= operator combines two operations. First, it multiplies, and then it assigns. Therefore, the following statements are equivalent:

When a variable contains a string value, PowerShell appends the specified number of strings to the value, as follows:

To multiply an element of an array, use an index to identify the element that you want to multiply. For example, the following command multiplies the first element in the array (index position 0) by 2:
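(Sketches of the three uses described above; values are illustrative:)

    $a = 3
    $a *= 4   # equivalent to $a = $a * 4
    $a        # 12

    $b = "file"
    $b *= 3
    $b        # filefilefile

    $c = 1, 2, 3
    $c[0] *= 2
    $c        # 2 2 3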

The assignment by division operator

The assignment by division operator /= divides a numeric value by the value that's specified on the right side of the operator. The operator can't be used with string variables.

The /= operator combines two operations. First, it divides, and then it assigns. Therefore, the following two statements are equivalent:

For example, the following command uses the /= operator to divide the value of a variable:

To divide an element of an array, use an index to identify the element that you want to change. For example, the following command divides the second element in the array (index position 1) by 2:
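(Sketches of both uses; values are illustrative:)

    $a = 8
    $a /= 2   # equivalent to $a = $a / 2
    $a        # 4

    $b = 8, 4, 2
    $b[1] /= 2
    $b        # 8 2 2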

The assignment by modulus operator

The assignment by modulus operator %= divides the value of a variable by the value on the right side of the operator. Then, the %= operator assigns the remainder (known as the modulus) to the variable. You can use this operator only when a variable contains a single numeric value. You can't use this operator when a variable contains a string variable or an array.

The %= operator combines two operations. First, it divides and determines the remainder, and then it assigns the remainder to the variable. Therefore, the following statements are equivalent:
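That is:

```powershell
$a %= 2
$a = ($a % 2)
```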

The following example shows how to use the %= operator to save the modulus of a quotient:
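A sketch (values are illustrative):

```powershell
$a = 7
$a %= 4
$a    # 3
```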

The increment and decrement operators

The increment operator ++ increases the value of a variable by 1. When you use the increment operator in a simple statement, no value is returned. To view the result, display the value of the variable, as follows:
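For instance:

```powershell
$a = 7
$a++
$a    # 8
```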

To force a value to be returned, enclose the variable and the operator in parentheses, as follows:
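A sketch; note that the postfix form returns the value the variable had before the increment:

```powershell
$a = 7
($a++)    # outputs 7; $a is now 8
```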

The increment operator can be placed before (prefix) or after (postfix) a variable. The prefix version of the operator increments a variable before its value is used in the statement, as follows:
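For example:

```powershell
$a = 7
$c = ++$a
$c    # 8
```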

The postfix version of the operator increments a variable after its value is used in the statement. In the following example, the $c and $a variables have different values because the value is assigned to $c before $a changes:
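For instance:

```powershell
$a = 7
$c = $a++
$c    # 7
$a    # 8
```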

The decrement operator -- decreases the value of a variable by 1. As with the increment operator, no value is returned when you use the operator in a simple statement. Use parentheses to return a value, as follows:
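For instance:

```powershell
$a = 7
($a--)    # outputs 7; $a is now 6
```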

The prefix version of the operator decrements a variable before its value is used in the statement, as follows:
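For example:

```powershell
$a = 7
$d = --$a
$d    # 6
```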

The postfix version of the operator decrements a variable after its value is used in the statement. In the following example, the $d and $a variables have different values because the value is assigned to $d before $a changes:
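For instance:

```powershell
$a = 7
$d = $a--
$d    # 7
$a    # 6
```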

Null-coalescing assignment operator

The null-coalescing assignment operator ??= assigns the value of its right-hand operand to its left-hand operand only if the left-hand operand evaluates to null. The ??= operator doesn't evaluate its right-hand operand if the left-hand operand evaluates to non-null.
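A sketch (the ??= operator requires PowerShell 7.0 or later; values are illustrative):

```powershell
$x = $null
$x ??= 'new value'
$x    # new value
$x ??= 'ignored'
$x    # still 'new value'
```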

For more information, see Null-coalescing operator .

Microsoft .NET types

By default, when a variable has only one value, the value that's assigned to the variable determines the data type of the variable. For example, the following command creates a variable that has the System.Int32 type:
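For instance:

```powershell
$a = 6    # $a is created as a System.Int32
```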

To find the .NET type of a variable, use the GetType method and its FullName property. Be sure to include the parentheses after the GetType method name, even though the method call has no arguments:
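For example:

```powershell
$a = 6
$a.GetType().FullName    # System.Int32
```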

To create a variable that contains a string, assign a string value to the variable. To indicate that the value is a string, enclose it in quotation marks, as follows:
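For instance:

```powershell
$a = "Hello"
$a.GetType().FullName    # System.String
```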

If the first value that's assigned to the variable is a string, PowerShell treats all operations as string operations and casts new values to strings. This occurs in the following example:
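A sketch (values are illustrative):

```powershell
$a = "file"
$a += 3
$a    # file3 -- a string
```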

If the first value is an integer, PowerShell treats all operations as integer operations and casts new values to integers. This occurs in the following example:
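Similarly (values are illustrative):

```powershell
$a = 6
$a += "3"
$a    # 9 -- an integer
```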

You can cast a new scalar variable as any .NET type by placing the type name in brackets that precede either the variable name or the first assignment value. When you cast a variable, you are defining the type of data that can be stored in the variable.

For example, the following command casts the variable as a string type:
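For instance:

```powershell
[string]$a = 27
```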

The following example casts the first value, instead of casting the variable:
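That is:

```powershell
$a = [string]27
```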

You can't recast the data type of an existing variable if its value can't be converted to the new data type.

To change the data type, you must replace its value, as follows:
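A sketch:

```powershell
$a = "string"
[int]$a = 3    # replaces both the value and the type
```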

In addition, when you precede a variable name with a data type, the type of that variable is locked unless you explicitly override the type by specifying another data type. If you try to assign a value that's incompatible with the existing type, and you don't explicitly override the type, PowerShell displays an error, as shown in the following example:
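For instance (error text abbreviated):

```powershell
[int]$a = 3
$a = "string"
# Error: Cannot convert value "string" to type "System.Int32".
```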

In PowerShell, the data types of variables that contain multiple items in an array are handled differently from the data types of variables that contain a single item. Unless a data type is specifically assigned to an array variable, the data type is always System.Object[]. This data type is specific to arrays.

Sometimes, you can override the default type by specifying another type. For example, the following command casts the variable as a string[] array type:
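For instance:

```powershell
[string[]]$a = "one", "two", "three"
```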

PowerShell variables can be any .NET data type. In addition, you can assign any fully qualified .NET data type that's available in the current process. For example, the following command specifies a System.DateTime data type:
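A sketch (the date string is illustrative and is parsed according to your culture settings):

```powershell
[System.DateTime]$a = "5/31/2024"
```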

The variable will be assigned a value that conforms to the System.DateTime data type. The value of the $a variable would be the following:
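Continuing the sketch above, displaying $a would produce something like this (the exact format depends on your culture settings):

```
Friday, May 31, 2024 12:00:00 AM
```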

Assigning multiple variables

In PowerShell, you can assign values to multiple variables using a single command. The first element of the assignment value is assigned to the first variable, the second element is assigned to the second variable, the third element to the third variable, and so on. This is known as multiple assignment.

For example, the following command assigns the value 1 to the $a variable, the value 2 to the $b variable, and the value 3 to the $c variable:
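That is:

```powershell
$a, $b, $c = 1, 2, 3
```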

If the assignment value contains more elements than variables, all the remaining values are assigned to the last variable. For example, the following command contains three variables and five values:
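For instance:

```powershell
$a, $b, $c = 1, 2, 3, 4, 5
```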

Therefore, PowerShell assigns the value 1 to the $a variable and the value 2 to the $b variable. It assigns the values 3, 4, and 5 to the $c variable. To assign the values in the $c variable to three other variables, use the following format:
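That is:

```powershell
$d, $e, $f = $c
```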

This command assigns the value 3 to the $d variable, the value 4 to the $e variable, and the value 5 to the $f variable.

If the assignment value contains fewer elements than variables, the remaining variables are assigned the value $null . For example, the following command contains three variables and two values:
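For instance:

```powershell
$a, $b, $c = 1, 2
```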

Therefore, PowerShell assigns the value 1 to the $a variable and the value 2 to the $b variable. The $c variable is $null .

You can also assign a single value to multiple variables by chaining the variables. For example, the following command assigns a value of "three" to all four variables:
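That is:

```powershell
$a = $b = $c = $d = "three"
```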

Variable-related cmdlets

In addition to using an assignment operation to set a variable value, you can also use the Set-Variable cmdlet. For example, the following command uses Set-Variable to assign an array of 1, 2, 3 to the $a variable.
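A sketch of such a command (the parentheses group the array value):

```powershell
Set-Variable -Name a -Value (1, 2, 3)
```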

  • about_Arrays
  • about_Hash_Tables
  • about_Variables
  • Clear-Variable
  • Remove-Variable
  • Set-Variable


9. Classes

Classes provide a means of bundling data and functionality together. Creating a new class creates a new type of object, allowing new instances of that type to be made. Each class instance can have attributes attached to it for maintaining its state. Class instances can also have methods (defined by its class) for modifying its state.

Compared with other programming languages, Python’s class mechanism adds classes with a minimum of new syntax and semantics. It is a mixture of the class mechanisms found in C++ and Modula-3. Python classes provide all the standard features of Object Oriented Programming: the class inheritance mechanism allows multiple base classes, a derived class can override any methods of its base class or classes, and a method can call the method of a base class with the same name. Objects can contain arbitrary amounts and kinds of data. As is true for modules, classes partake of the dynamic nature of Python: they are created at runtime, and can be modified further after creation.

In C++ terminology, normally class members (including the data members) are public (except see below Private Variables ), and all member functions are virtual . As in Modula-3, there are no shorthands for referencing the object’s members from its methods: the method function is declared with an explicit first argument representing the object, which is provided implicitly by the call. As in Smalltalk, classes themselves are objects. This provides semantics for importing and renaming. Unlike C++ and Modula-3, built-in types can be used as base classes for extension by the user. Also, like in C++, most built-in operators with special syntax (arithmetic operators, subscripting etc.) can be redefined for class instances.

(Lacking universally accepted terminology to talk about classes, I will make occasional use of Smalltalk and C++ terms. I would use Modula-3 terms, since its object-oriented semantics are closer to those of Python than C++, but I expect that few readers have heard of it.)

9.1. A Word About Names and Objects

Objects have individuality, and multiple names (in multiple scopes) can be bound to the same object. This is known as aliasing in other languages. This is usually not appreciated on a first glance at Python, and can be safely ignored when dealing with immutable basic types (numbers, strings, tuples). However, aliasing has a possibly surprising effect on the semantics of Python code involving mutable objects such as lists, dictionaries, and most other types. This is usually used to the benefit of the program, since aliases behave like pointers in some respects. For example, passing an object is cheap since only a pointer is passed by the implementation; and if a function modifies an object passed as an argument, the caller will see the change — this eliminates the need for two different argument passing mechanisms as in Pascal.

9.2. Python Scopes and Namespaces

Before introducing classes, I first have to tell you something about Python’s scope rules. Class definitions play some neat tricks with namespaces, and you need to know how scopes and namespaces work to fully understand what’s going on. Incidentally, knowledge about this subject is useful for any advanced Python programmer.

Let’s begin with some definitions.

A namespace is a mapping from names to objects. Most namespaces are currently implemented as Python dictionaries, but that’s normally not noticeable in any way (except for performance), and it may change in the future. Examples of namespaces are: the set of built-in names (containing functions such as abs() , and built-in exception names); the global names in a module; and the local names in a function invocation. In a sense the set of attributes of an object also form a namespace. The important thing to know about namespaces is that there is absolutely no relation between names in different namespaces; for instance, two different modules may both define a function maximize without confusion — users of the modules must prefix it with the module name.

By the way, I use the word attribute for any name following a dot — for example, in the expression z.real , real is an attribute of the object z . Strictly speaking, references to names in modules are attribute references: in the expression modname.funcname , modname is a module object and funcname is an attribute of it. In this case there happens to be a straightforward mapping between the module’s attributes and the global names defined in the module: they share the same namespace! [ 1 ]

Attributes may be read-only or writable. In the latter case, assignment to attributes is possible. Module attributes are writable: you can write modname.the_answer = 42 . Writable attributes may also be deleted with the del statement. For example, del modname.the_answer will remove the attribute the_answer from the object named by modname .

Namespaces are created at different moments and have different lifetimes. The namespace containing the built-in names is created when the Python interpreter starts up, and is never deleted. The global namespace for a module is created when the module definition is read in; normally, module namespaces also last until the interpreter quits. The statements executed by the top-level invocation of the interpreter, either read from a script file or interactively, are considered part of a module called __main__ , so they have their own global namespace. (The built-in names actually also live in a module; this is called builtins .)

The local namespace for a function is created when the function is called, and deleted when the function returns or raises an exception that is not handled within the function. (Actually, forgetting would be a better way to describe what actually happens.) Of course, recursive invocations each have their own local namespace.

A scope is a textual region of a Python program where a namespace is directly accessible. “Directly accessible” here means that an unqualified reference to a name attempts to find the name in the namespace.

Although scopes are determined statically, they are used dynamically. At any time during execution, there are 3 or 4 nested scopes whose namespaces are directly accessible:

the innermost scope, which is searched first, contains the local names

the scopes of any enclosing functions, which are searched starting with the nearest enclosing scope, contain non-local, but also non-global names

the next-to-last scope contains the current module’s global names

the outermost scope (searched last) is the namespace containing built-in names

If a name is declared global, then all references and assignments go directly to the next-to-last scope containing the module’s global names. To rebind variables found outside of the innermost scope, the nonlocal statement can be used; if not declared nonlocal, those variables are read-only (an attempt to write to such a variable will simply create a new local variable in the innermost scope, leaving the identically named outer variable unchanged).

Usually, the local scope references the local names of the (textually) current function. Outside functions, the local scope references the same namespace as the global scope: the module’s namespace. Class definitions place yet another namespace in the local scope.

It is important to realize that scopes are determined textually: the global scope of a function defined in a module is that module’s namespace, no matter from where or by what alias the function is called. On the other hand, the actual search for names is done dynamically, at run time — however, the language definition is evolving towards static name resolution, at “compile” time, so don’t rely on dynamic name resolution! (In fact, local variables are already determined statically.)

A special quirk of Python is that – if no global or nonlocal statement is in effect – assignments to names always go into the innermost scope. Assignments do not copy data — they just bind names to objects. The same is true for deletions: the statement del x removes the binding of x from the namespace referenced by the local scope. In fact, all operations that introduce new names use the local scope: in particular, import statements and function definitions bind the module or function name in the local scope.

The global statement can be used to indicate that particular variables live in the global scope and should be rebound there; the nonlocal statement indicates that particular variables live in an enclosing scope and should be rebound there.

9.2.1. Scopes and Namespaces Example

This is an example demonstrating how to reference the different scopes and namespaces, and how global and nonlocal affect variable binding:
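A sketch consistent with the discussion below:

```python
def scope_test():
    def do_local():
        spam = "local spam"

    def do_nonlocal():
        nonlocal spam
        spam = "nonlocal spam"

    def do_global():
        global spam
        spam = "global spam"

    spam = "test spam"
    do_local()
    print("After local assignment:", spam)
    do_nonlocal()
    print("After nonlocal assignment:", spam)
    do_global()
    print("After global assignment:", spam)

scope_test()
print("In global scope:", spam)
```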

The output of the example code is:
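Running the sketch above prints:

```
After local assignment: test spam
After nonlocal assignment: nonlocal spam
After global assignment: nonlocal spam
In global scope: global spam
```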

Note how the local assignment (which is default) didn't change scope_test's binding of spam. The nonlocal assignment changed scope_test's binding of spam, and the global assignment changed the module-level binding.

You can also see that there was no previous binding for spam before the global assignment.

9.3. A First Look at Classes

Classes introduce a little bit of new syntax, three new object types, and some new semantics.

9.3.1. Class Definition Syntax

The simplest form of class definition looks like this:
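In schematic form (the placeholders stand for any statements):

```python
class ClassName:
    <statement-1>
    .
    .
    .
    <statement-N>
```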

Class definitions, like function definitions ( def statements) must be executed before they have any effect. (You could conceivably place a class definition in a branch of an if statement, or inside a function.)

In practice, the statements inside a class definition will usually be function definitions, but other statements are allowed, and sometimes useful — we’ll come back to this later. The function definitions inside a class normally have a peculiar form of argument list, dictated by the calling conventions for methods — again, this is explained later.

When a class definition is entered, a new namespace is created, and used as the local scope — thus, all assignments to local variables go into this new namespace. In particular, function definitions bind the name of the new function here.

When a class definition is left normally (via the end), a class object is created. This is basically a wrapper around the contents of the namespace created by the class definition; we’ll learn more about class objects in the next section. The original local scope (the one in effect just before the class definition was entered) is reinstated, and the class object is bound here to the class name given in the class definition header ( ClassName in the example).

9.3.2. Class Objects

Class objects support two kinds of operations: attribute references and instantiation.

Attribute references use the standard syntax used for all attribute references in Python: obj.name . Valid attribute names are all the names that were in the class’s namespace when the class object was created. So, if the class definition looked like this:
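A definition consistent with the attributes discussed below:

```python
class MyClass:
    """A simple example class"""
    i = 12345

    def f(self):
        return 'hello world'
```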

then MyClass.i and MyClass.f are valid attribute references, returning an integer and a function object, respectively. Class attributes can also be assigned to, so you can change the value of MyClass.i by assignment. __doc__ is also a valid attribute, returning the docstring belonging to the class: "A simple example class" .

Class instantiation uses function notation. Just pretend that the class object is a parameterless function that returns a new instance of the class. For example (assuming the above class):
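For instance:

```python
x = MyClass()
```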

creates a new instance of the class and assigns this object to the local variable x .

The instantiation operation (“calling” a class object) creates an empty object. Many classes like to create objects with instances customized to a specific initial state. Therefore a class may define a special method named __init__() , like this:
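A minimal version:

```python
def __init__(self):
    self.data = []
```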

When a class defines an __init__() method, class instantiation automatically invokes __init__() for the newly created class instance. So in this example, a new, initialized instance can be obtained by:
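That is:

```python
x = MyClass()
```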

Of course, the __init__() method may have arguments for greater flexibility. In that case, arguments given to the class instantiation operator are passed on to __init__() . For example,
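a sketch along these lines:

```python
class Complex:
    def __init__(self, realpart, imagpart):
        self.r = realpart
        self.i = imagpart

x = Complex(3.0, -4.5)
x.r, x.i    # (3.0, -4.5)
```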

9.3.3. Instance Objects

Now what can we do with instance objects? The only operations understood by instance objects are attribute references. There are two kinds of valid attribute names: data attributes and methods.

Data attributes correspond to “instance variables” in Smalltalk, and to “data members” in C++. Data attributes need not be declared; like local variables, they spring into existence when they are first assigned to. For example, if x is the instance of MyClass created above, the following piece of code will print the value 16, without leaving a trace:
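A sketch (assuming x is such an instance):

```python
x.counter = 1
while x.counter < 10:
    x.counter = x.counter * 2
print(x.counter)    # prints 16
del x.counter
```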

The other kind of instance attribute reference is a method . A method is a function that “belongs to” an object. (In Python, the term method is not unique to class instances: other object types can have methods as well. For example, list objects have methods called append, insert, remove, sort, and so on. However, in the following discussion, we’ll use the term method exclusively to mean methods of class instance objects, unless explicitly stated otherwise.)

Valid method names of an instance object depend on its class. By definition, all attributes of a class that are function objects define corresponding methods of its instances. So in our example, x.f is a valid method reference, since MyClass.f is a function, but x.i is not, since MyClass.i is not. But x.f is not the same thing as MyClass.f — it is a method object , not a function object.

9.3.4. Method Objects

Usually, a method is called right after it is bound:
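For instance:

```python
x.f()
```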

In the MyClass example, this will return the string 'hello world' . However, it is not necessary to call a method right away: x.f is a method object, and can be stored away and called at a later time. For example:
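Something like:

```python
xf = x.f
while True:
    print(xf())
```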

will continue to print hello world until the end of time.

What exactly happens when a method is called? You may have noticed that x.f() was called without an argument above, even though the function definition for f() specified an argument. What happened to the argument? Surely Python raises an exception when a function that requires an argument is called without any — even if the argument isn’t actually used…

Actually, you may have guessed the answer: the special thing about methods is that the instance object is passed as the first argument of the function. In our example, the call x.f() is exactly equivalent to MyClass.f(x) . In general, calling a method with a list of n arguments is equivalent to calling the corresponding function with an argument list that is created by inserting the method’s instance object before the first argument.

In general, methods work as follows. When a non-data attribute of an instance is referenced, the instance’s class is searched. If the name denotes a valid class attribute that is a function object, references to both the instance object and the function object are packed into a method object. When the method object is called with an argument list, a new argument list is constructed from the instance object and the argument list, and the function object is called with this new argument list.

9.3.5. Class and Instance Variables

Generally speaking, instance variables are for data unique to each instance and class variables are for attributes and methods shared by all instances of the class:
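A sketch illustrating the distinction:

```python
class Dog:
    kind = 'canine'          # class variable shared by all instances

    def __init__(self, name):
        self.name = name     # instance variable unique to each instance

d = Dog('Fido')
e = Dog('Buddy')
d.kind    # 'canine' -- shared by all dogs
e.kind    # 'canine' -- shared by all dogs
d.name    # 'Fido'   -- unique to d
e.name    # 'Buddy'  -- unique to e
```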

As discussed in A Word About Names and Objects, shared data can have possibly surprising effects when it involves mutable objects such as lists and dictionaries. For example, the tricks list in the following code should not be used as a class variable because just a single list would be shared by all Dog instances:
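A sketch of the problematic design:

```python
class Dog:
    tricks = []             # mistaken use of a class variable

    def __init__(self, name):
        self.name = name

    def add_trick(self, trick):
        self.tricks.append(trick)

d = Dog('Fido')
e = Dog('Buddy')
d.add_trick('roll over')
e.add_trick('play dead')
d.tricks    # ['roll over', 'play dead'] -- unexpectedly shared by all dogs
```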

Correct design of the class should use an instance variable instead:
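For instance:

```python
class Dog:
    def __init__(self, name):
        self.name = name
        self.tricks = []    # creates a new empty list for each dog

    def add_trick(self, trick):
        self.tricks.append(trick)

d = Dog('Fido')
e = Dog('Buddy')
d.add_trick('roll over')
e.add_trick('play dead')
d.tricks    # ['roll over']
e.tricks    # ['play dead']
```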

9.4. Random Remarks

If the same attribute name occurs in both an instance and in a class, then attribute lookup prioritizes the instance:
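A sketch (the class and attribute names are illustrative):

```python
class Warehouse:
    purpose = 'storage'
    region = 'west'

w1 = Warehouse()
print(w1.purpose, w1.region)    # storage west
w2 = Warehouse()
w2.region = 'east'              # the instance attribute shadows the class attribute
print(w2.purpose, w2.region)    # storage east
```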

Data attributes may be referenced by methods as well as by ordinary users (“clients”) of an object. In other words, classes are not usable to implement pure abstract data types. In fact, nothing in Python makes it possible to enforce data hiding — it is all based upon convention. (On the other hand, the Python implementation, written in C, can completely hide implementation details and control access to an object if necessary; this can be used by extensions to Python written in C.)

Clients should use data attributes with care — clients may mess up invariants maintained by the methods by stamping on their data attributes. Note that clients may add data attributes of their own to an instance object without affecting the validity of the methods, as long as name conflicts are avoided — again, a naming convention can save a lot of headaches here.

There is no shorthand for referencing data attributes (or other methods!) from within methods. I find that this actually increases the readability of methods: there is no chance of confusing local variables and instance variables when glancing through a method.

Often, the first argument of a method is called self . This is nothing more than a convention: the name self has absolutely no special meaning to Python. Note, however, that by not following the convention your code may be less readable to other Python programmers, and it is also conceivable that a class browser program might be written that relies upon such a convention.

Any function object that is a class attribute defines a method for instances of that class. It is not necessary that the function definition is textually enclosed in the class definition: assigning a function object to a local variable in the class is also ok. For example:
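A sketch:

```python
# Function defined outside the class
def f1(self, x, y):
    return min(x, x + y)

class C:
    f = f1

    def g(self):
        return 'hello world'

    h = g
```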

Now f , g and h are all attributes of class C that refer to function objects, and consequently they are all methods of instances of C — h being exactly equivalent to g . Note that this practice usually only serves to confuse the reader of a program.

Methods may call other methods by using method attributes of the self argument:
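For instance (the Bag class here is illustrative):

```python
class Bag:
    def __init__(self):
        self.data = []

    def add(self, x):
        self.data.append(x)

    def addtwice(self, x):
        self.add(x)
        self.add(x)
```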

Methods may reference global names in the same way as ordinary functions. The global scope associated with a method is the module containing its definition. (A class is never used as a global scope.) While one rarely encounters a good reason for using global data in a method, there are many legitimate uses of the global scope: for one thing, functions and modules imported into the global scope can be used by methods, as well as functions and classes defined in it. Usually, the class containing the method is itself defined in this global scope, and in the next section we’ll find some good reasons why a method would want to reference its own class.

Each value is an object, and therefore has a class (also called its type ). It is stored as object.__class__ .

9.5. Inheritance

Of course, a language feature would not be worthy of the name “class” without supporting inheritance. The syntax for a derived class definition looks like this:
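In schematic form:

```python
class DerivedClassName(BaseClassName):
    <statement-1>
    .
    .
    .
    <statement-N>
```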

The name BaseClassName must be defined in a namespace accessible from the scope containing the derived class definition. In place of a base class name, other arbitrary expressions are also allowed. This can be useful, for example, when the base class is defined in another module:
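For instance:

```python
class DerivedClassName(modname.BaseClassName):
    ...
```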

Execution of a derived class definition proceeds the same as for a base class. When the class object is constructed, the base class is remembered. This is used for resolving attribute references: if a requested attribute is not found in the class, the search proceeds to look in the base class. This rule is applied recursively if the base class itself is derived from some other class.

There’s nothing special about instantiation of derived classes: DerivedClassName() creates a new instance of the class. Method references are resolved as follows: the corresponding class attribute is searched, descending down the chain of base classes if necessary, and the method reference is valid if this yields a function object.

Derived classes may override methods of their base classes. Because methods have no special privileges when calling other methods of the same object, a method of a base class that calls another method defined in the same base class may end up calling a method of a derived class that overrides it. (For C++ programmers: all methods in Python are effectively virtual .)

An overriding method in a derived class may in fact want to extend rather than simply replace the base class method of the same name. There is a simple way to call the base class method directly: just call BaseClassName.methodname(self, arguments) . This is occasionally useful to clients as well. (Note that this only works if the base class is accessible as BaseClassName in the global scope.)

Python has two built-in functions that work with inheritance:

Use isinstance() to check an instance’s type: isinstance(obj, int) will be True only if obj.__class__ is int or some class derived from int .

Use issubclass() to check class inheritance: issubclass(bool, int) is True since bool is a subclass of int . However, issubclass(float, int) is False since float is not a subclass of int .

9.5.1. Multiple Inheritance

Python supports a form of multiple inheritance as well. A class definition with multiple base classes looks like this:
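In schematic form:

```python
class DerivedClassName(Base1, Base2, Base3):
    <statement-1>
    .
    .
    .
    <statement-N>
```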

For most purposes, in the simplest cases, you can think of the search for attributes inherited from a parent class as depth-first, left-to-right, not searching twice in the same class where there is an overlap in the hierarchy. Thus, if an attribute is not found in DerivedClassName, it is searched for in Base1, then (recursively) in the base classes of Base1, and if it is not found there, it is searched for in Base2, and so on.

In fact, it is slightly more complex than that; the method resolution order changes dynamically to support cooperative calls to super() . This approach is known in some other multiple-inheritance languages as call-next-method and is more powerful than the super call found in single-inheritance languages.

Dynamic ordering is necessary because all cases of multiple inheritance exhibit one or more diamond relationships (where at least one of the parent classes can be accessed through multiple paths from the bottommost class). For example, all classes inherit from object , so any case of multiple inheritance provides more than one path to reach object . To keep the base classes from being accessed more than once, the dynamic algorithm linearizes the search order in a way that preserves the left-to-right ordering specified in each class, that calls each parent only once, and that is monotonic (meaning that a class can be subclassed without affecting the precedence order of its parents). Taken together, these properties make it possible to design reliable and extensible classes with multiple inheritance. For more detail, see The Python 2.3 Method Resolution Order .

9.6. Private Variables

“Private” instance variables that cannot be accessed except from inside an object don’t exist in Python. However, there is a convention that is followed by most Python code: a name prefixed with an underscore (e.g. _spam ) should be treated as a non-public part of the API (whether it is a function, a method or a data member). It should be considered an implementation detail and subject to change without notice.

Since there is a valid use-case for class-private members (namely to avoid name clashes of names with names defined by subclasses), there is limited support for such a mechanism, called name mangling . Any identifier of the form __spam (at least two leading underscores, at most one trailing underscore) is textually replaced with _classname__spam , where classname is the current class name with leading underscore(s) stripped. This mangling is done without regard to the syntactic position of the identifier, as long as it occurs within the definition of a class.

Name mangling is helpful for letting subclasses override methods without breaking intraclass method calls. For example:
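A sketch of such a pair of classes:

```python
class Mapping:
    def __init__(self, iterable):
        self.items_list = []
        self.__update(iterable)

    def update(self, iterable):
        for item in iterable:
            self.items_list.append(item)

    __update = update   # private copy of the original update() method

class MappingSubclass(Mapping):
    def update(self, keys, values):
        # provides a new signature for update()
        # but does not break __init__()
        for item in zip(keys, values):
            self.items_list.append(item)
```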

The above example would work even if MappingSubclass were to introduce a __update identifier since it is replaced with _Mapping__update in the Mapping class and _MappingSubclass__update in the MappingSubclass class respectively.

Note that the mangling rules are designed mostly to avoid accidents; it still is possible to access or modify a variable that is considered private. This can even be useful in special circumstances, such as in the debugger.

Notice that code passed to exec() or eval() does not consider the classname of the invoking class to be the current class; this is similar to the effect of the global statement, the effect of which is likewise restricted to code that is byte-compiled together. The same restriction applies to getattr() , setattr() and delattr() , as well as when referencing __dict__ directly.

9.7. Odds and Ends

Sometimes it is useful to have a data type similar to the Pascal “record” or C “struct”, bundling together a few named data items. The idiomatic approach is to use dataclasses for this purpose:
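A sketch (the Employee fields are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    dept: str
    salary: int

john = Employee('john', 'computer lab', 1000)
john.dept      # 'computer lab'
john.salary    # 1000
```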

A piece of Python code that expects a particular abstract data type can often be passed a class that emulates the methods of that data type instead. For instance, if you have a function that formats some data from a file object, you can define a class with methods read() and readline() that get the data from a string buffer instead, and pass it as an argument.

Instance method objects have attributes, too: m.__self__ is the instance object with the method m() , and m.__func__ is the function object corresponding to the method.

9.8. Iterators

By now you have probably noticed that most container objects can be looped over using a for statement:
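For instance:

```python
for element in [1, 2, 3]:
    print(element)
for element in (1, 2, 3):
    print(element)
for key in {'one': 1, 'two': 2}:
    print(key)
for char in "123":
    print(char)
```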

This style of access is clear, concise, and convenient. The use of iterators pervades and unifies Python. Behind the scenes, the for statement calls iter() on the container object. The function returns an iterator object that defines the method __next__() which accesses elements in the container one at a time. When there are no more elements, __next__() raises a StopIteration exception which tells the for loop to terminate. You can call the __next__() method using the next() built-in function; this example shows how it all works:
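A sketch of the protocol in action:

```python
s = 'abc'
it = iter(s)
next(it)    # 'a'
next(it)    # 'b'
next(it)    # 'c'
next(it)    # raises StopIteration
```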

Having seen the mechanics behind the iterator protocol, it is easy to add iterator behavior to your classes. Define an __iter__() method which returns an object with a __next__() method. If the class defines __next__() , then __iter__() can just return self :
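For instance, an iterator that loops over a sequence backwards:

```python
class Reverse:
    """Iterator for looping over a sequence backwards."""

    def __init__(self, data):
        self.data = data
        self.index = len(data)

    def __iter__(self):
        return self

    def __next__(self):
        if self.index == 0:
            raise StopIteration
        self.index = self.index - 1
        return self.data[self.index]

for char in Reverse('spam'):
    print(char)    # m, a, p, s
```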

9.9. Generators

Generators are a simple and powerful tool for creating iterators. They are written like regular functions but use the yield statement whenever they want to return data. Each time next() is called on it, the generator resumes where it left off (it remembers all the data values and which statement was last executed). An example shows that generators can be trivially easy to create:
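For instance, the backwards loop above can be written as a generator:

```python
def reverse(data):
    for index in range(len(data) - 1, -1, -1):
        yield data[index]

for char in reverse('golf'):
    print(char)    # f, l, o, g
```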

Anything that can be done with generators can also be done with class-based iterators as described in the previous section. What makes generators so compact is that the __iter__() and __next__() methods are created automatically.

Another key feature is that the local variables and execution state are automatically saved between calls. This makes the function easier to write and much clearer than an approach using instance variables like self.index and self.data.

In addition to automatic method creation and saving program state, when generators terminate, they automatically raise StopIteration . In combination, these features make it easy to create iterators with no more effort than writing a regular function.

9.10. Generator Expressions

Some simple generators can be coded succinctly as expressions using a syntax similar to list comprehensions but with parentheses instead of square brackets. These expressions are designed for situations where the generator is used right away by an enclosing function. Generator expressions are more compact but less versatile than full generator definitions and tend to be more memory friendly than equivalent list comprehensions.
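A few sketches (values are illustrative):

```python
sum(i * i for i in range(10))              # sum of squares: 285

xvec = [10, 20, 30]
yvec = [7, 5, 3]
sum(x * y for x, y in zip(xvec, yvec))     # dot product: 260

data = 'golf'
list(data[i] for i in range(len(data) - 1, -1, -1))    # ['f', 'l', 'o', 'g']
```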


‘A Dangerous Assignment’ Director and Reporter Discuss the Risks in Investigating the Powerful in Maduro’s Venezuela

A still from FRONTLINE and Armando.info's documentary "A Dangerous Assignment: Uncovering Corruption in Maduro's Venezuela."

The investigation at the heart of FRONTLINE’s new documentary A Dangerous Assignment: Uncovering Corruption in Maduro’s Venezuela unfolded as Venezuelan journalist Roberto Deniz started looking into complaints about the poor quality of food distributed by a government program.

Venezuela was in the throes of economic collapse in 2016. The value of the country’s oil had fallen, leading to a deficit, and Venezuelans faced high inflation and food shortages. President Nicolás Maduro responded by launching a food program called the Local Committees for Supply and Production (Comité Locales de Abastecimiento y Producción or CLAP).

As Deniz and the Venezuelan independent news site Armando.info where he worked looked into the program, they would help uncover a corruption scheme and the figure at the heart of the scandal: Alex Saab. The documentary, made in collaboration with Armando.info, was directed by Juan Ravell, produced by Jeff Arak and reported by Deniz — who is now living and working in exile.

Deniz and Ravell spoke with FRONTLINE about the risks of reporting on Venezuela, tracing a corruption scandal that reached into the Venezuelan government and spanned continents, and the price that journalists pay for investigating the powerful in Maduro’s government.

This interview has been edited for length and clarity. Some of the responses have been translated from Spanish.

Can you both take me back to how this whole investigation started?

Deniz: This has been a long story for us — “us” being Armando.info, but also for me, as a journalist. My investigation about Alex Saab started in 2016. That was the moment in which I decided to start investigating what was happening behind the CLAP program. But in 2015, the name Alex Saab came up in an investigation about a contract he received to construct housing for poor people in Venezuela. The moment when I realized that Alex Saab was also behind the CLAP food program, it was a big signal: This is not a simple contractor of the Venezuelan government. He’s a man who is more powerful than we could imagine.

Ravell: I wasn’t there from the beginning, but I did start collaborating with Armando.info around 2019. They were doing short pieces with different styles in their investigation, so I was making shorts for them and little videos. I remember clearly when the Alacran [Scorpion] investigation broke. An investigation by Armando.info found that opposition lawmakers worked secretly to defend some of Alex Saab’s businesses abroad. And I remember listening to the phone call that Roberto had with Venezuelan opposition politician Luis Parra and I was thinking, “This is insane that nobody’s listening to this call and so few people are aware of the job that Roberto is doing.” The way that Roberto behaved — very controlled, pressing but fair — was impressive to me. That’s when I got the initial idea. Then, when Alex Saab was detained in Cape Verde, we were pretty much convinced that this needs to be a documentary.

Alex Saab’s business network was complicated and vast. Juan, how did you decide what aspects of the story to focus on while filming this documentary?

Ravell: Roberto’s investigation led the narrative. We wanted to follow the most important stories Roberto was publishing, and those that had the most impact. The milk investigation is pretty important to Venezuelans and finding out who was behind its import. A chemical analysis requested by Armando.info showed some of the powdered milk in the CLAP boxes was so deficient in calcium and high in sodium that a researcher noted it couldn’t be classified as milk.

We knew Alex Saab before that. There had been some reporting by Armando.info, but when they connect him to the CLAP importing scheme, that’s when this story gets going. So from then on, we’re basically following Roberto through his investigation and his stories. Other journalists were also working on this case like Gerardo Reyes from Univision and Joshua Goodman from The Associated Press.

Roberto, at what point did you realize the scale of Saab’s business network and its connection to so many Venezuelan government projects?

Deniz: Since 2016, when I realized that Alex Saab was behind the CLAP program. For me, it was very clear that Alex Saab was a man that we have to investigate. The idea that he was the man behind this program to provide food to poor people in Venezuela — that Nicolás Maduro gave all this power to these guys — was a very important signal. When I started, I realized that there was a lot of fear to talk about him. Some sources immediately told me, “Well, Roberto, you have to be careful, because this is a powerful man and is very close to Nicolás Maduro.”

Roberto, you say in the film that some of the information about Saab’s dealings was difficult to uncover, and you needed to find alternative sources. Can you share the process you used to vet these sources to make sure that the information that they were providing was legitimate?

Deniz: In a country like Venezuela, there are severe threats and intimidation against the journalists that dare to do this kind of work. Normally, a journalist can access information from public records, and you can access officials and expect some kind of response. But that doesn’t happen in Venezuela. They won’t even want to acknowledge that you have contacted them.

I spoke to many of the sources that I had gathered for many years, whom I thought could have useful information about what was happening with the CLAP program. That was how I started to gain access to information, documents, papers that confirmed and signaled that Alex Saab was behind all this. You have to double-check, check three or even four times, every piece of information.

I also had many off-the-record sources. I think that over time, those sources have seen the determination that I and the team at Armando.info have had regarding this investigation, and that’s the main reason why they have trusted in our rigor and perseverance.

What was the most challenging aspect of telling the story visually?

Ravell: I’d say finding the balance. It’s a lot of documents. It’s a lot of words. It’s a lot of very dry information that we need to present in an interesting way, so I think what we managed to do is just rely on the narrative and try to find the best ways to translate that into a compelling film.

We were present in certain key moments. When Roberto’s house in Venezuela was raided, we had a camera with Roberto and we were able to interview him that night. The day of the prisoner swap — when Saab was returned from Miami to Venezuela — was interesting, because we had a team in Bogotá following Roberto and a team in Miami. So two different teams in two separate cities covering the same thing. It was an interesting experiment. And I think it comes across nicely in the film.

Roberto, you shared how reporting this story has led to you living in exile. How has that affected your ability to tell stories about what’s going on in Venezuela? What kind of challenges do you face now doing the same kind of journalism you used to do from inside the country?

Deniz: Since I had to get out of Venezuela in 2018, the most difficult thing was answering, “How can I do my work now?” It was so difficult. All of my life, since I decided to become a journalist, I was living in Venezuela, working in Venezuela. But ultimately, my exile was a solution for me, because I could keep working.

The most difficult thing, I think, is the personal part, the family. I know that all of these investigations are not easy for my family, all their grief, all the personal costs that I decided to face during all of these years.

People told me, “Wow, Roberto, you are brave,” “You are a strong person.” I am totally convinced that it’s not related to that. It’s related to our duty as journalists, our responsibility as journalists in a country like Venezuela. People don’t have the opportunity to know what is really happening in the country. I think that has pushed me to continue on in this investigation.

Many times I have thought that this is the moment to end the investigation. I cannot continue anymore. But I have to continue on what we have tried to do in Armando.info.

Can you both speak about the government’s reaction to this journalism, and what it says about press freedoms in Venezuela? What impact is the current atmosphere having on reporters still working inside Venezuela?

Ravell: It’s pretty clear from NGOs that research freedom of expression that investigative journalism and free, independent journalism is at risk in Venezuela. If you publish something and you get sued for defamation, that could end up getting you criminal charges and that can put you in jail. What Armando.info decided to do is just go in and report on hard things, subjects like corruption, and report on people who are very connected to the highest reaches of the Venezuelan government. By doing that, the choice they had to make was to leave the country. One of the few ways you can report on Venezuela is by going into exile. Still, in exile, there are risks, as you can see in the film. Roberto’s house in Venezuela was raided right before Alex Saab was extradited. So he’s in exile, and he’s still persecuted.

Deniz: I have been in exile since 2018, and nowadays I don’t feel that I am safe living abroad. I think that shows how powerful the message of an autocratic government is when they decide to oppose the work of independent journalists. If you see all the stories related to the Alex Saab case, the first legal action that I faced was in 2017 when he decided to sue me. I could face jail if I stayed in Venezuela. I’m totally sure about that. But then in 2021, I got a new legal action against me. I think that is a clear message that even if you get out of Venezuela, but you continue with your work, you are going to face all of the power of the Venezuelan government. It’s so sad for us as journalists.

Shortly before the premiere of this film, the Venezuelan government began responding to the documentary. Can you give us your take on their response?

Deniz: The attorney general of Venezuela accused us — Ewald Scharfenberg, editor and founder of Armando.info, and me, as a reporter — of supposedly being part of and benefiting from a “corruption scheme” related to Venezuela’s ex-oil minister, Tarek El Aissami, who was incarcerated some weeks ago and who’s been questioned for more than a year within a corruption investigation in PDVSA, the Venezuelan state-owned oil company.

It’s not a coincidence that this is happening right after we released the documentary’s trailer. For me, it’s more than evident that this accusation is total nonsense, but that doesn’t make it less serious, because this is a criminalization of the journalism that we have been doing in Armando.info. Sometimes I think that if you compare the work of Armando.info with all the power of the Venezuelan government, we’re like a dwarf fighting a giant, a tiny particle against a huge government, but that only shows you the authoritarian nature of this regime. They won’t tolerate, they won’t accept that some people persist and keep investigating.

Watch the full documentary, A Dangerous Assignment: Uncovering Corruption in Maduro’s Venezuela.

Max Maldonado, Tow Journalism Fellow, FRONTLINE/Newmark Journalism School Fellowships, FRONTLINE



Support parallel working with multiple sequences in Dynamics 365 Sales 

  • By Akhilesh Shukla, Sr. Product Manager, and Rik Roy Chowdhury, Senior Product Manager

A guide for sales managers and sellers who want to improve their customer engagement and collaboration with multiple sequences, now available across Dynamics 365 Sales.

Productivity and efficiency are important to sales teams. Improving customer engagement, as well as collaboration when multiple team members work on an account, can be key to securing deals faster, and bringing better business results. With the support of multiple sequences in Dynamics 365 Sales, this now becomes easier than ever. 

In this blog, we will show you how you can streamline parallel sales processes, coordinate your sales efforts, and optimize your customer interactions. You will also learn how to create, manage, and monitor multiple sequences in Dynamics 365 Sales. Whether you are a sales manager or a seller, we will cover some useful tips and best practices to help you make the most of this powerful feature.

Ready to boost your sales performance with multiple sequences? Sign up for a free trial of Dynamics 365 Sales today and discover how it can transform your sales organization. 

What are sequences and why do you need them? 

Sequences are a series of steps that sellers can follow to engage with customers and prospects in a consistent and effective way. They help sales managers provide guidance on best practices and ensure that every customer interaction is aligned with the sales strategy. Sequences can include various types of activities, such as emails, phone calls, and tasks. In addition, sellers can use sequences themselves to automate their successful selling formulas and reduce manual work.

But what if you have multiple team members working on the same record (for example an account)? How can you ensure that they are not stepping on each other’s toes or sending conflicting messages to the customer? How can you leverage the expertise and skills of different sellers to create a better customer experience? 

This is where multiple sequences come in handy. Multiple sequences allow you to connect more than one sequence to a record, so that different sellers can work simultaneously on the same record with different sequences. For example, you can have an account manager and a solution architect working on the same opportunity, each with their own set of activities. This way, you can optimize your customer engagement and collaboration to drive better business outcomes. 

How to connect multiple sequences to a record? 

There are two ways to connect multiple sequences to a record: manually and automatically. 

  • Manually connecting a record to a sequence: Connect a record to a sequence by using the connect sequence button on the record page to launch the connect sequence dialog. You can connect multiple sequences to a record at the same time, as long as the record owner or the sequence owner has the relevant permissions to do so. You can also disconnect a sequence from a record manually, by selecting the disconnect sequence button on the record page.
  • Automatically connecting a record to a sequence: Connect a record to a sequence by using the segmentation feature. Segmentation allows you to define criteria for a group of records that qualify for a sequence. For example, you can create a segment for all the opportunities that have a high probability of closing in the next quarter. You can then associate a sequence to that segment, so that whenever a record meets the criteria, it is automatically connected to the sequence.


How to assign a sequence to a different user than the record owner? 

By default, when a record is connected to a sequence, the sequence is assigned to the record owner. However, you may want to assign a sequence to a different user, depending on their role and responsibilities. For example, you may want to assign a sequence to a specialist role for a record, such as a solution architect or a technical consultant. 

To do this, you can use the sequence assignment feature. Sequence assignment lets you select a field on the record entity or a related entity that determines who the sequence is assigned to. For example, if you have a field called opportunity_rep in the opportunity entity, you can assign the sequence to the user who is specified as the opportunity rep for that record. You can also use the properties pane to assign the sequence to the account owner, or to an owner or access team, which lets you assign the sequence to a user with a specific role in that team.


How to view the connected sequences and users for a record? 

Once you have connected multiple sequences to a record, you may want to view the connected sequences and the users who are working on them. This can help you get a better understanding of the customer engagement and collaboration happening on the record and what work is left to execute. 

To see the sequences and users that are linked to a record, select a sequence title in the Up next widget. This opens a preview pane that shows all the sequences related to the record, along with the list of activities that have been set up within each one, giving you a full overview of the sequence, including its progress and the activities for different paths.

You can also use the sequence stats report to see status, progress, and performance of each sequence. In addition, you can see the number of completed, overdue, and upcoming activities, as well as each email’s open rate, click rate, and conversion rate in the sequence.  


How to view sequence steps in a record using the Up next widget? 

In cases where a record is associated with multiple sequences, you may want to plan execution efficiently by reviewing all the available steps of those sequences. The new enhancements let you do exactly that: the sequence name shown on the Up next widget is now a clickable link. Clicking it reveals a comprehensive list of all steps associated with that sequence, so you can view both executed and upcoming steps in a single pane, streamlining the planning of subsequent steps.


Conclusion 

Multiple sequences in Dynamics 365 Sales are a powerful capability that can help you improve your customer engagement and collaboration:

  • By connecting multiple sequences to a record, you can optimize your sales process and leverage the skills and expertise of different sellers.
  • By assigning a sequence to a different user than the record owner, you can ensure that the right person is doing the right activity.
  • By viewing the connected sequences and users of a record, you can get a better insight into the customer communication and collaboration happening on the record.

With multiple sequences, you can drive better business outcomes and gain a competitive edge in the marketplace. 

Learn how to Improve Sales process efficiency using sequence insights – Microsoft Dynamics 365 Blog   

Learn more about sequences and how to create them:   Sequences in sales accelerator | Microsoft Learn   

Learn more about segments in sequences:   Create segments and connect them to sequences | Microsoft Learn  

Explore our getting started templates to quickly create sequences and try them for yourself:   Sequence templates | Microsoft Learn   

Don’t have Dynamics 365 Sales yet? Try it out now:  Sales Overview – Dynamics Sales Solutions | Microsoft Dynamics 365  


Related posts

  • Improve Sales process efficiency using sequence insights
  • New sales sequences experience improves seller productivity
  • Winning sales sequences: A/B testing guide
  • Create Dynamics 365 implementation projects easily with the new onboarding wizard


A review of common statistical methods for dealing with multiple pollutant mixtures and multiple exposures.

Guiming Zhu,

  • 1 Department of Health Statistics, School of Public Health, Shanxi Medical University, Taiyuan, China
  • 2 Key Laboratory of Coal Environmental Pathogenicity and Prevention (Shanxi Medical University), Ministry of Education, Taiyuan, China

Traditional environmental epidemiology has consistently focused on the impact of single exposures on specific health outcomes, treating concurrent exposures as variables to be controlled. However, as the environment continues to change, humans increasingly face complex exposures to multi-pollutant mixtures, and accurately assessing the impact of these mixtures on health has become a central concern in current environmental research. At the same time, the continuous development and optimization of statistical methods offer robust support for handling large datasets, strengthening the capacity for in-depth research on the effects of multiple exposures on health. To examine complicated exposure mixtures, we introduce commonly used statistical methods and their developments, such as weighted quantile sum regression, Bayesian kernel machine regression, and toxicity equivalency analysis, delineating their applications, advantages, weaknesses, and the interpretability of their results. The review also provides guidance for researchers studying multi-pollutant mixtures, aiding them in selecting appropriate statistical methods and utilizing R software for more accurate and comprehensive assessments of the impact of multi-pollutant mixtures on human health.

Graphical Abstract.

1 Introduction

In contemporary industrialized society, environmental concerns such as air pollution, water pollution, and soil contamination have gained significant attention ( 1 – 4 ). Some pollutants are metabolized quickly because of their short half-lives; others, such as heavy metals, insecticides, flame retardants, persistent organic pollutants, and other endocrine-disrupting chemicals, continue to accumulate in the human body and have significant, long-term effects on human health ( 5 – 8 ). For instance, heavy metal toxicity has been linked to the development of neurodegenerative diseases and various ocular pathologies, while concurrent exposure to heavy metals can elevate the risk of prostate cancer and thyroid enlargement ( 9 – 11 ). Polybrominated diphenyl ether is a persistent and pervasive environmental pollutant that disrupts the human endocrine system, leading to health implications such as developmental, thyroidal, and reproductive toxicity ( 12 , 13 ). Particulate matter and nitrogen oxides in the atmospheric environment are closely correlated with stroke incidence and mortality: the higher the concentration of particulate matter exposure, the greater the risk of stroke ( 14 ). However, most studies address single pollutants and do not accurately reflect the real world, in which people are exposed to combinations of several hazardous substances at any given time that may act antagonistically or synergistically. Moreover, single-pollutant analysis methods often fall short in capturing the complexity and interactive effects of multi-pollutant mixtures ( 15 ), and the potential for spurious associations is higher in single-pollutant models, contributing to disagreements between studies. Consequently, accurately assessing the health effects of exposure to mixtures of environmental pollutants has become a focal point of current environmental epidemiology. In recent years, health effect assessments of environmental risk factors have shifted from the traditional single-pollutant approach to the study of mixtures of multiple pollutants ( 16 – 18 ), aiming to more accurately reflect the impact of environmental risk factors on human health.

To handle exposure to multi-pollutant mixtures, researchers have raised a number of questions that need to be addressed, as shown in Table 1 .

Table 1. Key research questions on multi-pollutant mixtures exposure.

The study of multi-pollutant mixtures is characterized by two primary focuses: (A) Estimating the effects of pollutants and (B) addressing the complexity associated with multi-pollutant mixtures.

Regarding effects estimation, the focus is divided into three aspects: (1) Overall effects of multi-pollutant mixtures; (2) Independent effects of components within multi-pollutant mixtures; and (3) Joint effects of mixture components.

Addressing the complexity of multi-pollutant mixtures involves three main aspects: (1) Addressing the challenge of high-dimensional data when multiple chemical substances are present in the model; (2) Resolving the issue of high correlations among pollutants to assess synergistic or antagonistic effects; and (3) Addressing interplay and non-linear effects among pollutants.

Thus, diverse statistical methods are introduced in this paper to address these specific issues, as shown in the Graphical Abstract.

2 Methods for effects estimation

In estimating the overall effects of multi-pollutant mixtures, two main approaches are commonly used: treating the mixture as a single exposure or analyzing a weighted sum of exposures ( 17 ). For example, particulate matter concentration serves as a comprehensive measure of the particulate components in ambient air, assuming an equal impact on health for all components ( 23 ). Alternatively, overall effect estimation can involve a weighted sum of individual component effects, with weights based on toxicological potency or contribution percentage ( 17 ); for instance, phthalate metabolite concentrations have been modeled using molar-sum or potency-weighted-sum methods ( 24 , 25 ). Independent effects estimation recognizes that diverse pollution components have varied adverse effects, requiring evaluation of the overall mixture impact before assessing individual component effects ( 26 ). Joint effects estimation accounts for component interactions beyond additive impacts ( 18 ), considering mechanisms and pathways; for example, non-volatile carbonaceous particles such as black carbon may require specific attention when studying joint health effects with volatile organic compounds. In this section we briefly introduce several methods of effects estimation. Figure 1 shows details of the effect estimation methods and R packages for their implementation.

Figure 1. Overview of the methods and R packages for implementation.

2.1 Weighted quantile sum regression

Weighted quantile sum (WQS) regression is a convenient tool for addressing the effect-estimation problem and the high-dimensionality and high-correlation issues among multiple pollutants, particularly among homogeneous pollutants ( 27 ). It is widely used in studies of environmental exposure to multi-pollutant mixtures and enables the identification of high-risk factors. The model constructs a weighted index in a supervised manner to assess the overall effect of environmental exposure and the contribution of each component in the mixture to that overall effect. WQS weights the individual exposures according to the correlation between exposure and outcome, resulting in a composite index value across the exposure components. This index characterizes the level of mixed exposure to a range of exposure components and evaluates the impact of each component on health outcomes. Following Tanner’s recommendation, introducing a bootstrap step into WQS yields stable weights for exposure components and stable WQS index estimates ( 28 ). The core idea is to construct the WQS index to achieve dimensionality reduction, address multicollinearity, and filter high-risk factors through the weighting process; the final weight coefficient of each component in the exposure index represents its contribution to health outcomes.

WQS has advantages in analyzing multifactor exposure owing to its simple model structure, small computational burden, and fast analysis speed. However, the “directional consistency” precondition must be met, i.e., the effects of all components in the mixture must be in the same direction (all positive or all negative). Recent research has explored and developed new WQS-related methods. To relax the unidirectionality assumption, methods such as quantile g-computation combined with the g-algorithm ( 29 ), grouped WQS ( 30 ), and the Bayesian group WQS model ( 31 ) have been developed. Attention has also been given to lagged WQS to address time-varying exposure mixtures ( 32 , 33 ).
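
As an illustrative sketch only (not the article’s Supplementary code), a basic WQS fit with the gWQS package mentioned in the Conclusion might look as follows; the data frame dat, the outcome y, and the mixture column names are hypothetical placeholders, and the arguments follow the package vignette’s interface as best recalled here.

```r
# Hedged sketch: WQS regression with the gWQS package (placeholder data).
library(gWQS)

mix <- c("pollutant_1", "pollutant_2", "pollutant_3")  # hypothetical components

fit <- gwqs(y ~ wqs,              # 'wqs' is the index constructed by the package
            mix_name = mix,
            data = dat,
            q = 4,                # score exposures into quartiles
            validation = 0.6,     # 60% of data held out for validation
            b = 100,              # bootstrap samples to stabilize the weights
            b1_pos = TRUE,        # assume a positive overall mixture effect
            family = "gaussian",
            seed = 2024)

summary(fit)        # overall effect of the WQS index on the outcome
fit$final_weights   # estimated contribution (weight) of each component
```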

2.2 Bayesian kernel machine regression

Bayesian kernel machine regression (BKMR) provides another approach to analyzing multi-pollutant mixtures ( 34 , 35 ). In contrast to WQS, which estimates each component’s percentage contribution to the overall effect, BKMR provides posterior probabilities that each component is included in the total effect of the mixture, and it can visualize various exposure-response shapes. BKMR can also examine the independent impact of a mixture component by holding the other components constant at predetermined percentiles, such as the 50th percentile of the exposure distribution. BKMR does not require a parametric expression to be specified, allowing for nonlinear effects and interactions. It generates kernel functions based on the mixture variables included in the model, followed by Bayesian sampling and analysis to generate relationship curves between the mixture components and the disease variables included in the model.

In addition to analyzing the mixture’s overall impacts and each component’s effects separately, BKMR also estimates any possible interactions between the distinct components. Posterior inclusion probabilities (PIPs) generated by BKMR range from 0 (least important) to 1 (most important). Components with PIP ≥ 0.5 are identified as relatively important mixture components. BKMR can also be used to study possible three-way interactions. This is achieved by fixing one of the exposures at different quantile levels and visualizing the exposure-response functions for the remaining two exposures. Overall, BKMR has been widely used in environmental health research, including the analysis of continuous variables, binary variables, and repeated measurement data ( 36 , 37 ). The advantages of this method include the ability to simultaneously assess the importance of each variable, analyze data with uncertainty, and easily extend the obtained results to longitudinal data.
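
As a minimal, hedged sketch of the workflow described above using the bkmr package ( 35 ): y, Z (exposure matrix), and X (covariate matrix) are placeholder objects, not data from the studies cited here.

```r
# Hedged sketch: BKMR with component-wise variable selection (placeholder data).
library(bkmr)

set.seed(2024)
fit <- kmbayes(y = y, Z = Z, X = X,  # outcome, exposure matrix, covariates
               iter = 10000,         # MCMC iterations
               varsel = TRUE)        # variable selection -> yields PIPs

ExtractPIPs(fit)  # posterior inclusion probabilities; PIP >= 0.5 flags
                  # relatively important mixture components
```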

Although BKMR can effectively assess the health effects of multi-pollutant mixtures, it has certain limitations. First, the studied exposure variables must be continuous, and the size of the PIPs is easily influenced by tuning parameters, so caution is required when interpreting results; the method may obscure underlying complex features of the data. If some components in the mixture are positively associated with the outcome while others are comparably negatively associated, the overall effect may appear null, and other methods are needed to verify the estimation of their interactions. In addition, when causality, time-varying exposure, or computational efficiency on massive datasets must be considered, the traditional implementation of BKMR may be limited. Several newer methods extend the BKMR strategy to address these limitations, such as Bayesian kernel machine regression causal mediation analysis (BKMR-CMA) ( 38 ) and the Bayesian kernel machine regression distributed lag model (BKMR-DLM) ( 39 ).

2.3 Toxicity equivalency analysis

In addition to the two commonly used estimation methods mentioned above, pollutants with similar mechanisms of action and the same toxicological endpoint exhibit additive toxicity, yet individual pollutants contribute differently to the overall health risk. A normalization method, known as toxicity equivalency factor (TEF) analysis, is therefore required. A TEF is generally obtained by comparing the “starting point” of health risk assessments for a standard reference compound with that of the respective compound. The exposure dose of a mixture, commonly represented as the toxicity equivalent quantity (TEQ), is calculated by multiplying the TEF of each compound by its exposure metric and summing the products. By combining the TEQ with reference metrics such as the reference dose (RfD) or the carcinogenic slope factor, the health risk of the mixture can be assessed ( 40 – 43 ). The TEF represents the relative toxicity of an isomer of a compound and is set to 1 for the most toxic congener, 2,3,7,8-TCDD; the toxicities of other pollutants are converted to their corresponding relative toxic intensities. TEFs have also been defined for individual polycyclic aromatic hydrocarbons, e.g., a TEF of 0.001 for pyrene, and daily total intake can be estimated from plasma polycyclic aromatic hydrocarbon levels based on pharmacokinetic models ( 40 – 42 ). The results can be compared against specified standards to determine the presence of carcinogenic risk ( 43 ). To address nonlinear problems, the acceptable concentration range model has been developed based on the RfD concept ( 44 ).
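
The TEQ calculation itself is simple arithmetic; the toy sketch below uses illustrative congener concentrations and TEF values chosen for demonstration only.

```r
# Toy TEQ sketch: TEQ = sum_i (TEF_i * concentration_i). Values are illustrative.
conc <- c(TCDD = 2.0, PeCDD = 1.5, OCDD = 30.0)    # hypothetical exposure metrics
tef  <- c(TCDD = 1.0, PeCDD = 1.0, OCDD = 0.0003)  # illustrative TEF values

teq <- sum(tef * conc)  # potency-weighted sum across compounds
teq                     # compare against a reference metric such as the RfD
```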

TEF analysis has the advantage of being easy to understand and directly associated with real exposure and toxicity data. However, it requires available toxicity and exposure data for each chemical, making the assessment results dependent on the selection of indicator chemicals and the quality of the toxicological information. Uncertainty in the chemicals significantly affects the uncertainty of the risk assessment results.

2.4 Other methods for effects estimation

In addition to the three statistical methods for estimating effects mentioned above, there are also novel and unique methods for effect estimation, although they may have a narrower audience. These include Bayesian regression trees ( 45 ), Bayesian data synthesis (BDS) ( 46 ), Bayesian subset selection and variable importance for interpretable prediction and classification (BSSVI) ( 47 ), directed acyclic graph analysis ( 48 ), the Bayesian treed distributed lag model (DLMtree) ( 49 ), factor analysis for interactions (FIN) ( 50 ), the parametric decision analysis method ( 51 ), graph Laplacian-based Gaussian processes (GL-GPs) ( 52 ), computational improvements for Bayesian multivariate regression models based on latent meshed Gaussian processes (GriPS) ( 53 ), and the multiple exposure distributed lag model with variable selection ( 54 ).

3 Methods for dimensionality reduction

When a multi-pollutant mixture has few components, analysis is relatively simple, but the dimensionality of the data increases dramatically when mixtures contain many components. Many statistical methods lack the capability to address this issue, and even methods designed to handle the complexity of high-dimensional data incur rapidly growing time costs as data dimensionality grows. Furthermore, high correlation among components may lead to multicollinearity: analyzing correlated components with similar sources, exposure pathways, or metabolic processes may yield biased conclusions regardless of which one is studied individually. Faced with the challenges of high dimensionality and multicollinearity, a crucial aspect of studying the health impacts of multi-pollutant mixtures is learning low-dimensional structure in the data to enhance interpretability and statistical efficiency, and employing dimensionality reduction methods proves to be a favorable approach. In this section we briefly introduce several dimensionality reduction methods. Figure 1 shows details of the methods for dimensionality reduction and R packages for their implementation.

3.1 Principal component analysis and factor analysis

Principal component analysis (PCA), introduced by Pearson for non-random variables and later extended to random vectors by Hotelling, transforms a set of potentially correlated variables into a set of linearly uncorrelated variables, referred to as principal components, through an orthogonal transformation ( 55 ). The primary objective of PCA is to explain the majority of the variance in the original data using fewer variables, converting highly correlated variables into ones that are independent or uncorrelated. When analyzing the relationship between multiple pollutant indicators and health, PCA can reduce the number of indicators for analysis, minimizing the information lost from the original indicators and facilitating comprehensive data analysis. It simplifies high-dimensional exposure data into several orthogonal components usable in regression models, thus mitigating multicollinearity issues. For example, Smit applied PCA to estimate the relationship between the risk of asthma and eczema in school-age children and 16 pollutants measured in their mothers’ serum; the study ultimately incorporated indicators from five principal components, explaining 70% of the variance in the exposure data ( 56 ).
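
A minimal base-R sketch of this workflow, assuming a placeholder exposure data frame expos and a binary outcome y:

```r
# Hedged sketch: PCA on exposures, then regression on the leading components.
pca <- prcomp(expos, center = TRUE, scale. = TRUE)  # standardize, then rotate
summary(pca)             # proportion of variance explained by each component

scores <- pca$x[, 1:5]   # first five principal component scores
fit <- glm(y ~ scores, family = binomial)  # components as uncorrelated predictors
summary(fit)
```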

PCA’s main limitations include difficulty in interpreting results, as the components are not in the same units as the original exposure variables, and the derived components may lack a direct relationship with study outcomes as they are derived in an unsupervised manner. Subsequently, PCA has evolved into methods like supervised PCA, which overcomes these issues by excluding “pollutants” that do not provide information directly related to the outcomes ( 57 ). Roberts applied this method to air pollution analysis, proposing a recursive algorithm that identifies the optimal predictor for study outcomes and combines it into several relevant principal components ( 58 ). Other developments include principal component pursuit (PCP), an analysis method based on matrix factorization, extended to multi-pollutant mixtures by Gibson. Through cross-validation in simulations, PCP identified the true number of patterns in all simulations, while PCA achieved this in only 32% of simulations, demonstrating PCP’s superiority in most simulation scenarios ( 59 ). In addition to the above methods, Positive matrix factorization (PMF) is a variant of PCA applicable to multi-pollutant profiles, deriving air pollution sources from individual chemical components ( 60 ). Specifically, PMF decomposes the matrix of mixture data into two matrices—source contributions and source profiles. Source contributions represent the mass contribution of each source to the mixture measurements, while source profiles reflect the emission types from a given source. Source contributions are constrained to be non-negative, and the method can incorporate uncertainty measurements related to the data at each point ( 61 – 64 ).

Factor analysis (FA) is another commonly used dimensionality reduction method that groups variables based on the correlation matrix, creating common factors that represent the fundamental structure of the data. It decomposes multidimensional variables into a small number of common factors: each original variable is broken into two parts, a linear combination of common factors that condenses the vast majority of the information in the original variables, and a special factor unrelated to the common factors that reflects the gap between the linear combination and the original variable. In other words, FA aggregates numerous variables into a few independent common factors with minimal loss of the original information. These common factors reflect the essential information of the numerous variables, reduce the number of variables, and reveal the inherent connections among them. Perturbed FA is commonly used in multi-pollutant mixture studies, focusing on the similarities and differences in exposure conditions among different groups; for example, Roy used this method to assess differences in exposure characteristics in biological or social structures based on race/ethnicity ( 65 ).
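
For comparison, a minimal base-R factor analysis sketch on the same placeholder exposure frame:

```r
# Hedged sketch: common-factor model with varimax rotation (placeholder data).
fa <- factanal(expos, factors = 3, rotation = "varimax", scores = "regression")
print(fa$loadings, cutoff = 0.3)  # which pollutants load on which common factor
head(fa$scores)                   # factor scores for use in downstream models
```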

Both PCA and FA seek a small number of new variables that comprehensively reflect the majority of the information in all the original variables. Although fewer in number than the original variables, the new variables retain substantial information, so the reliability of analyses based on them remains high. Moreover, these new variables are uncorrelated, eliminating multicollinearity while achieving dimensionality reduction. In PCA, the newly determined variables are linear combinations of the original variables, obtained through a coordinate transformation. In contrast, FA aims to explain the complex relationships present in many observed variables using a small number of common factors; it does not recombine the original variables but decomposes them.

3.2 Clustering analysis

Clustering analysis (CA) organizes data into clusters, groups of similar elements, such that instances within the same cluster are similar to each other while those in different clusters are dissimilar. Similarity among data points is determined by defining a distance or similarity coefficient ( 66 ). Once several clusters are identified, the next step is to select a representative prototype for each cluster. CA can be applied to exposure data to define groups, and indicators of group membership can then be used as predictor variables in health outcome regression models.

Clustering can be categorized into different groups based on technique, with partition-based clustering being the most widely used; k-means is a common approach ( 67 ). One advantage of the k-means method is its linear complexity: its execution time is proportional to the number of individuals, which makes it suitable for large datasets. However, the choice of initial centers and the number of clusters is arbitrary and can influence the results; hierarchical classification can nevertheless be applied to the cluster centers obtained from the k-means method. Clustering has been used in several studies to assess the impact of various pollutants. For instance, in a time series analysis of air pollution, one study used k-means to divide days into five groups representing days with low pollution levels, high concentrations of crustal particles, high particle content from traffic and combustion of oil, days influenced by regional pollution sources, and days with high concentrations of particles from wood or oil burning ( 68 ); some clusters were associated with pulse amplitude. Similarly, based on pollutant characteristics and community background, an evaluation was conducted of the correlation between NO 2 , NO, and PM 2.5 concentrations and low birth weight ( 69 ).
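
A minimal sketch of the day-clustering idea with base R’s kmeans; pm is a placeholder matrix of daily pollutant measurements.

```r
# Hedged sketch: k-means on standardized daily pollution profiles (placeholder).
set.seed(2024)
km <- kmeans(scale(pm), centers = 5, nstart = 25)  # 5 clusters, 25 random starts
table(km$cluster)  # number of days assigned to each cluster
km$centers         # standardized pollutant profile of each cluster
```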

The challenges of CA lie in the selection of clusters, classification methods, and the number of clusters. CA separates entities into distinct groups that can be difficult to summarize under a single label. The process typically requires choosing appropriate distance metrics, clustering algorithms, and the number of clusters in advance; these choices are often based on the user’s subjective judgment, and different selections may yield disparate clustering outcomes, thereby rendering the results subjective.

4 Methods for variable selection

When analyzing multi-pollutant mixtures with many components, it is not necessary to estimate the impact of every component of the mixture; rather, the focus is on investigating the effects of a few crucial components that exhibit the maximum toxicity to human health and/or have the highest predictive power for the outcomes of interest. It is therefore imperative to employ appropriate methods for identifying or selecting the important variables that represent the exposure-response relationship between individual exposures in the mixture and the outcomes. Such methods are frequently known as “variable selection.” In this section we briefly introduce several methods of variable selection. Figure 1 shows details of the methods for variable selection and R packages for their implementation.

4.1 Partial least squares

Partial least squares (PLS) regression combines principal component analysis and multivariate regression, taking into account the correlation between the outcomes and exposure variables ( 70 ). In essence, PLS regression searches for a linear decomposition of the exposure matrix that maximizes the covariance between exposures and outcomes: the stronger the association between an exposure variable and the outcome, the higher that variable’s weight in the linear combination. PLS regression can also include multiple outcome variables. The optimal number of components can be selected based on cross-validated mean squared error ( 71 ). However, a drawback of PLS regression is that the linear combination can be challenging to interpret, especially in the presence of a large number of original exposure variables.
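
A minimal sketch with the pls package; dat is a placeholder data frame holding the outcome y and the exposure columns.

```r
# Hedged sketch: PLS regression with cross-validated component selection.
library(pls)

fit <- plsr(y ~ ., data = dat, ncomp = 10, scale = TRUE, validation = "CV")
validationplot(fit, val.type = "MSEP")              # CV error by component count
ncomp_opt <- selectNcomp(fit, method = "onesigma")  # heuristic component choice
```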

To address this limitation, Chun and Keles introduced sparse PLS regression, which simultaneously combines variable selection and dimensionality reduction ( 72 ). The method yields a linear combination of a reduced number of exposure variables, with sparsity introduced into the loadings of the exposure variables through penalty terms. The optimal number of components and the sparsity parameters are selected based on cross-validated performance. This method has been applied in simulation studies of exposure-health associations: in one simulation study involving 237 generated exposure covariates, 0 to 25 of which were related to the outcomes, sparse PLS regression demonstrated better sensitivity in distinguishing true predictive factors from correlated covariates ( 73 ).

4.2 Deletion/substitution/addition algorithm

The deletion/substitution/addition (DSA) algorithm is a variable selection method ( 74 , 75 ). Its main steps are: (1) removal of selected variables; (2) substitution of selected variables with unselected ones; and (3) addition of new variables. Five-fold cross-validation minimizing the root mean square error (L2 loss function) of the prediction equation determines the number and particular kinds of variables included in the model. To ensure selection stability, DSA is run with different seed numbers for 50 iterations, and a binomial generalized linear model is then evaluated for the multi-exposure variables. Variables included in the final model are those selected in at least 6% ( n ≥ 3 times) or 10% ( n ≥ 5 times) of DSA iterations, and multicollinearity is validated in the final model.

Compared with traditional linear regression, DSA reduces the false-positive rate, allows mutual adjustment between variables, and can explore interactions between chemicals, although its effectiveness is limited for interactions involving chemicals with low detection rates. In contrast to stepwise model selection procedures, DSA has the advantage of being less sensitive to outliers and permits movement between non-nested statistical models. In previous applications, the DSA algorithm has been used in multi-pollutant mixtures analysis, estimating the relationship between O 3 , CO, NO 2 , PM 10 and lung function ( 76 ). However, DSA has faced criticism: when the ratio of sample size to the number of candidate predictors is small, estimates can be inconsistent, and its statistical properties for confidence intervals are compromised when there is substantial correlation between predictors ( 77 ).

4.3 Penalty-based algorithms

Least absolute shrinkage and selection operator (LASSO) regression is highly similar to ordinary least squares, with the key difference lying in the estimation of coefficients through the minimization of a slightly different quantity, resulting in a shrinkage penalty on the coefficients’ magnitudes ( 78 ). It penalizes the absolute size of the regression coefficients based on the value of the tuning parameter λ. Consequently, LASSO can drive the coefficients of irrelevant variables to zero, thereby performing automatic variable selection; when λ is small, the results essentially converge to least squares estimation. Elastic net (ENET) combines the LASSO method with ridge regression (RR) ( 79 ); it includes both L1 and L2 penalty terms on the regression coefficients. It can therefore select a best subset of variables by shrinking some effect estimates exactly to zero, as LASSO does, while retaining sets of highly correlated variables with similar effect estimates, as a RR model does. For instance, in the Veterans Affairs Normative Aging Study, LASSO enabled the selection of PM 2.5 components related to blood pressure ( 80 ). In a recent study based on ENET-penalized regression, two metabolites of phthalates were found to be consistently associated with impaired fetal growth ( 81 ). The group-lasso interaction-net method extends LASSO to select bidirectional interaction terms ( 82 ), allowing the simultaneous use of LASSO while controlling the false discovery rate ( 83 ). A key characteristic of LASSO is the introduction of an L1 regularization term in estimation, which compresses certain coefficients exactly to zero and thereby achieves feature selection. Nevertheless, this sparsity may render the model overly sensitive to noise, and the selected features may prove unstable across different datasets.
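
A minimal glmnet sketch; X is a placeholder numeric exposure matrix and y a placeholder outcome vector. Setting alpha = 1 gives the LASSO, while 0 < alpha < 1 gives the elastic net blend of the L1 and L2 penalties.

```r
# Hedged sketch: LASSO and elastic net with cross-validated lambda (placeholders).
library(glmnet)

set.seed(2024)
cv_lasso <- cv.glmnet(X, y, alpha = 1)    # pure L1 penalty (LASSO)
cv_enet  <- cv.glmnet(X, y, alpha = 0.5)  # L1/L2 blend (elastic net)

coef(cv_lasso, s = "lambda.min")  # coefficients at the CV-optimal lambda;
                                  # exact zeros mark de-selected pollutants
```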

4.4 Machine learning approaches

Machine learning (ML) is a research methodology focused on discovering patterns within data and utilizing these patterns to make predictions. Variable selection is a crucial issue in ML, as the predictive performance of a model is influenced by the variables it includes: the number of variables, their correlations, and the inclusion of important variables significantly impact the accuracy and efficiency of predictive models. Variable selection therefore plays an indispensable role in constructing predictive models, and numerous ML algorithms are available for variable selection based on variable importance. Common methods include classification and regression trees ( 84 ), random forest (RF) models ( 85 ), support vector machines (SVM) ( 86 ), K-nearest neighbors (KNN) ( 87 ), naive Bayes ( 88 ), neural networks ( 89 ), adaptive boosting (AdaBoost) ( 90 ), gradient boosting (GBM) ( 91 ), eXtreme gradient boosting (XGBoost) ( 92 ), the light gradient boosting machine (LightGBM) ( 93 ), and CatBoost ( 94 ).
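
As one hedged example of importance-based selection, a random forest with permutation importance via the randomForest package; dat is a placeholder data frame with a continuous outcome y and exposure columns.

```r
# Hedged sketch: random forest variable importance (placeholder data frame).
library(randomForest)

set.seed(2024)
rf <- randomForest(y ~ ., data = dat, ntree = 500, importance = TRUE)
importance(rf)   # permutation importance (%IncMSE) for each exposure
varImpPlot(rf)   # rank exposures by their contribution to prediction
```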

While ML models demonstrate effective results, they often face challenges related to interpretability. For instance, in models like XGBoost or LightGBM, which comprise N trees, it is difficult to understand how the features of a specific sample influence the final result. To address this issue, SHAP (Shapley additive explanations) provides a method for explaining ML models, offering detailed and interpretable information about model predictions ( 95 ). As the demand for incorporating complex high-dimensional data in environmental health research continues to grow, researchers are increasingly turning to ML. Recent studies have employed ML algorithms such as AdaBoost, SVM, RF, decision tree classifiers (DT), and KNN to identify the relationship between heavy metal exposure and coronary heart disease; integrated with SHAP, these studies explained the models and determined the contributions of heavy metals in urine, such as cesium, thallium, antimony, dimethylarsinic acid, barium, and arsenic acid, to the risk of coronary heart disease, increasing the likelihood that coronary heart disease can be detected and treated early ( 96 ). Other studies have used multilayer perceptrons, RR, gradient boosting decision trees, voting classifiers, and KNN algorithms to generate optimal predictive models for multiple heavy metals causing hypertension, integrating permutation feature importance analysis, partial dependence plots, and SHAP into a single model interpretation workflow embedded within the ML pipeline ( 97 ). Note, however, that most of the mentioned ML models are used for prediction and require comparisons based on accuracy, sensitivity/recall, specificity, negative predictive value, false positive rate, false negative rate, and F1 score.

4.5 Bayesian variable selection methods

ML algorithms such as RF can provide measures of variable importance for mixture components, but these measures do not succinctly capture the overall magnitude or direction of the associations. Variable selection techniques within the regression framework, such as LASSO, shrink individual regression coefficients to zero, but they are typically based on relatively simple parametric models of the mixture components. To systematically address highly correlated exposures, BKMR employs a hierarchical variable selection approach that can incorporate prior knowledge about the correlation structure of the exposure variables/mixture components to provide PIPs, as detailed in Section 2.2.

5 Methods for identifying multi-exposure interactions

Although the various components in multi-pollutant mixtures may have completely independent effects on health outcomes, in many cases there are interactions among components in the mixtures. Interactions represent the mutually dependent effects of two or more variables and can manifest as synergistic, additive, or antagonistic effects ( 98 ). A typical example of interaction is the synergistic effect of O 3 and particulate matter on the incidence of cardiovascular diseases ( 99 ). In the real world, interactions among various exposure pollutants may exist, and the analysis of these interactions aims to identify and explain their effects. Analyzing and interpreting interactions among multiple exposures can provide a more comprehensive understanding of exposure patterns and identify cooperative effects between specific exposures under certain conditions. In this section we briefly introduce several methods for identifying interaction effects. Figure 1 shows details of the methods for identifying multi-exposure interactions and R packages for their implementation.

5.1 Basic interaction analysis

In interaction analysis, analysis of variance (ANOVA) is commonly used to test whether interaction effects among multiple exposures are significant: comparing the F -values or p -values of individual factors and their interactions indicates which factors exhibit interaction effects. Additionally, regression analysis can be used to explore the interaction effects of exposures, perform significance tests, and model the relationship between exposure interaction terms and outcomes.
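
A minimal base-R sketch of both approaches; grp_a/grp_b are hypothetical categorical exposure groups (e.g., tertiles) and exp_a/exp_b hypothetical continuous exposures in a placeholder data frame dat.

```r
# Hedged sketch: testing an interaction by ANOVA and by a product term.
m1 <- aov(outcome ~ grp_a * grp_b, data = dat)  # factorial ANOVA
summary(m1)                                     # F-test for the grp_a:grp_b term

m2 <- lm(outcome ~ exp_a * exp_b, data = dat)   # regression with product term
summary(m2)$coefficients["exp_a:exp_b", ]       # interaction estimate and p-value
```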

5.2 Bayesian statistical framework

Apart from BKMR, Antonelli utilized a semi-parametric Bayesian sparse prior regression framework to generate variable importance scores for each exposure and each pairwise interaction in the mixture ( 100 ).

5.3 Structural equation model

The structural equation model (SEM) combines specific sets of covariance and regression relationships among variables into a single coherent model ( 101 ). It is used to test and estimate relationships between observed data and latent variables, as well as to assess the fit of theoretical models. SEM combines various techniques such as FA and path analysis, allowing researchers to simultaneously explore complex relationships between multiple variables. In SEM, a measurement model can be constructed to capture measurement errors and covariances among different exposure factors and health outcomes, which aids in accurately measuring these factors while accounting for measurement error. It is also possible to examine the causal links between various exposure factors and health outcomes using the structural model.
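
A minimal lavaan sketch of this idea, with a hypothetical latent exposure measured by three placeholder volatile organic compound indicators:

```r
# Hedged sketch: SEM with a latent exposure factor (placeholder variable names).
library(lavaan)

model <- '
  exposure =~ voc1 + voc2 + voc3   # measurement model: latent exposure factor
  lungfunc ~ exposure + smoking    # structural model: effect on the outcome
'
fit <- sem(model, data = dat)
summary(fit, fit.measures = TRUE, standardized = TRUE)
```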

SEM is useful for estimating and understanding the network of relationships between variables (latent, observed, and error variables); it also estimates the degree of model fit and allows for measurement error in both independent and dependent variables ( 102 ). As researchers turn to modeling multi-pollutant mixtures, SEM is increasingly used to estimate their impact on health ( 103 , 104 ). For instance, SEM was used to assess the relationship between respiratory function, tobacco smoke exposure, and volatile organic compound exposure in a nationally representative sample of adolescents, revealing associations between respiratory function and certain types of volatile organic compounds ( 104 ). It is noteworthy that SEM analysis requires some basic assumptions of traditional statistical analyses, such as linearity and normality, to be met; otherwise, the obtained statistics may be unreliable.

6 Methods for nonlinear effects

Numerous epidemiological studies have identified nonlinear associations (U-shaped, inverted U-shaped, J-shaped, etc.) between mixed pollutant exposures and health outcomes, for example between plasma heavy metal concentrations and type 2 diabetes ( 105 ) and between volatile organic compounds and the heart rate variability index ( 106 ). Ignoring potential nonlinearity may result in biased conclusions, so a better approach is to fit the nonlinear relationship between exposure and outcome. In this section we briefly introduce several methods for estimating nonlinear effects. Figure 1 shows details of the methods for nonlinear effects and R packages for their implementation.

6.1 Spline regression and quantile regression

To overcome the limitations of polynomials, spline methods are often used for curve fitting, employing a piecewise-function strategy instead of complex polynomials. One commonly used method in pollution studies is the restricted cubic spline (RCS), which fits the curve relating a variable to an outcome using restricted cubic spline terms. For instance, Zhou combined RCS with logistic regression to estimate the relationship between typical heavy metal contents (lead, cadmium, mercury, and manganese) in the blood of adults and the metabolic syndrome ( 107 ). Similarly, generalized additive models can fit spline terms such as B-splines, natural splines, and thin-plate splines without requiring knots to be specified manually, controlling for the impact of nonlinear confounding factors by fitting curves for the corresponding nonlinear pollutant terms.

Quantile regression (QR) is a regression analysis method that models different quantiles of the dependent variable; it can handle issues such as non-normal error distributions, heteroscedasticity, and outliers. QR can also be used to fit the nonlinear relationship between pollutants and outcomes. For example, one study used linear regression and QR to investigate the relationship between increases in pollutant concentrations (PM 10 , PM 2.5 , NO 2 , and O 3 ) and changes in birth weight ( 108 ).
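
Minimal sketches of both strategies with standard packages; all variable names are placeholders.

```r
# Hedged sketch: spline fits and quantile regression (placeholder variables).
library(splines)
m_ns <- lm(outcome ~ ns(exposure, df = 4), data = dat)  # natural cubic spline

library(mgcv)
m_gam <- gam(outcome ~ s(exposure), data = dat)  # GAM chooses smoothness itself

library(quantreg)
m_qr <- rq(birthweight ~ pm25, tau = c(0.1, 0.5, 0.9), data = dat)  # three quantiles
summary(m_qr)
```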

6.2 Bayesian kernel machine regression

BKMR can also handle nonlinear relationships between exposures and outcomes. Exposure-response relationships are frequently nonlinear, and BKMR is an efficient way to capture such nonlinear relationships among pollutants. BKMR has been applied to a dataset on metal exposure and neurodevelopment in Bangladeshi children, indicating the presence of non-additive and nonlinear exposure-response functions between metals and a summary measure of psychomotor development ( 109 ).

6.3 Other methods for nonlinear effects

Methods highlighted in other categories can also be employed to address nonlinear problems, including cross-validated ensemble of kernels ( 110 ), TEV, BSSVI, BVSM, MatchAlign, BDS, BKMR-CMA, GriPS, SGP-MPI, BMIM, GL-GPs, Bayesian Tree Ensembles, BKMR-DLM, DLMtree, SPORM, FIN, and FOTP.

7 Holistic approaches to mixture studies

As understanding of environmental pollution deepens and technology advances, studying single pollutants can no longer capture the total health impact of pollutants on the human body, and researchers increasingly recognize the need to analyze the complex interactions within mixtures that lead to or exacerbate diseases. It is imperative to evaluate the connections between various risk factors and modifying factors from multiple biological dimensions. Similar to exploring the impact of genetic factors on chronic diseases through genome-wide association studies, exposome-wide association studies (EWAS) facilitate the investigation of non-genetic risk factors.

Initially proposed by Wild ( 111 ), the EWAS framework can be represented as P = G + E, where an individual’s phenotype (P), encompassing health and physical characteristics, is the sum of genetic factors (G) and environmental factors (E) ( 112 ). Rappaport further argues that exposure should not be limited to directly encountered chemicals but should consider a broader range of exposures, such as microbial exposure and life stress ( 113 ). EWAS provides a conceptual framework for understanding the complex network of interactions between genes and the environment, as well as their causal relationships with diseases. It facilitates a holistic analysis of the impact of genetics and the environment on human diseases, including DNA sequences, epigenetic DNA modifications, gene expression, metabolite analysis, and the intricate and dynamic interactions among environmental factors, all of which can influence disease phenotypes.

EWAS research does not solely focus on a single exposure but systematically addresses multiple exposures and their mutual influences, thereby increasing the complexity of the study ( 114 , 115 ). For instance, a recent study utilized data from the National Health and Nutrition Examination Survey (NHANES) and retained exposure factors, including 75 laboratory variables (clinical and biological biomarkers of environmental chemical exposure) and 64 lifestyle variables (63 dietary variables and 1 physical exercise variable). This study described the associations between body mass index, nutrition, clinical factors, and environmental factors among adolescents ( 116 ).

8 Conclusion

In conclusion, the statistical analysis of health effects resulting from multi-pollutant mixtures is a key challenge in current environmental epidemiological research. In this paper, we review statistical methods for multi-pollutant mixtures. It is essential to consider complementary approaches when examining scientific questions and to select statistical methods with the particular scientific problem in mind. By selecting appropriate statistical methods, considering the combined effects of various pollutants, and incorporating interdisciplinary collaboration and emerging technological tools, a more accurate and comprehensive assessment of the impact of mixed environmental pollutant exposure on human health can be achieved. This will contribute to the scientific basis for environmental protection and the formulation of public health policies, promoting sustainable development for human health.

To facilitate the application of the discussed statistical methods, we summarize the advantages and limitations of commonly used statistical methods and their corresponding R packages, and we provide basic statistical analyses conducted on the NHANES dataset with the gWQS package (please refer to Figure 1 and the Supplementary material for the statistical analysis code). This serves as a convenient resource for researchers to apply these methods directly.

Author contributions

GZ: Writing – review & editing, Visualization, Writing – original draft. YW: Visualization, Writing – original draft, Writing – review & editing. KC: Visualization, Writing – review & editing. SH: Visualization, Writing – review & editing. TW: Writing – review & editing, Funding acquisition, Supervision.
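
Funding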

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This study was supported by the National Natural Science Foundation of China (Grant numbers: 82073674 and 82373692).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpubh.2024.1377685/full#supplementary-material

1. Holgate, S. Air pollution is a public health emergency. BMJ. (2022) 378:o1664. doi: 10.1136/bmj.o1664


2. Münzel, T, Hahad, O, Daiber, A, and Landrigan, PJ. Soil and water pollution and human health: what should cardiologists worry about? Cardiovasc Res . (2023) 119:440–9. doi: 10.1093/cvr/cvac082


3. Boelee, E, Geerling, G, van der Zaan, B, Blauw, A, and Vethaak, AD. Water and health: from environmental pressures to integrated responses. Acta Trop . (2019) 193:217–26. doi: 10.1016/j.actatropica.2019.03.011

4. Tariq, M, Iqbal, B, Khan, I, Khan, AR, Jho, EH, Salam, A, et al. Microplastic contamination in the agricultural soil-mitigation strategies, heavy metals contamination, and impact on human health: a review. Plant Cell Rep . (2024) 43:65. doi: 10.1007/s00299-024-03162-6

5. Fu, Z, and Xi, S. The effects of heavy metals on human metabolism. Toxicol Mech Methods . (2020) 30:167–76. doi: 10.1080/15376516.2019.1701594

6. Zhang, D, and Lu, S. Human exposure to neonicotinoids and the associated health risks: a review. Environ Int . (2022) 163:107201. doi: 10.1016/j.envint.2022.107201

7. Feiteiro, J, Mariana, M, and Cairrão, E. Health toxicity effects of brominated flame retardants: from environmental to human exposure. Environ Pollut . (2021) 285:117475. doi: 10.1016/j.envpol.2021.117475

8. Yu, Y, Quan, X, Wang, H, Zhang, B, Hou, Y, and Su, C. Assessing the health risk of hyperuricemia in participants with persistent organic pollutants exposure – a systematic review and meta-analysis. Ecotoxicol Environ Saf . (2023) 251:114525. doi: 10.1016/j.ecoenv.2023.114525

9. He, JL, Li, GA, Zhu, ZY, Hu, MJ, Wu, HB, Zhu, JL, et al. Associations of exposure to multiple trace elements with the risk of goiter: a case-control study. Environ Pollut . (2021) 288:117739. doi: 10.1016/j.envpol.2021.117739

10. Vennam, S, Georgoulas, S, Khawaja, A, Chua, S, Strouthidis, NG, and Foster, PJ. Heavy metal toxicity and the aetiology of glaucoma. Eye (Lond) . (2020) 34:129–37. doi: 10.1038/s41433-019-0672-z

11. Lim, JT, Tan, YQ, Valeri, L, Lee, J, Geok, PP, Chia, SE, et al. Association between serum heavy metals and prostate cancer risk – a multiple metal analysis. Environ Int . (2019) 132:105109. doi: 10.1016/j.envint.2019.105109

12. Gomes, J, Begum, M, and Kumarathasan, P. Polybrominated diphenyl ether (PBDE) exposure and adverse maternal and infant health outcomes: systematic review. Chemosphere . (2024) 347:140367. doi: 10.1016/j.chemosphere.2023.140367

13. Linares, V, Bellés, M, and Domingo, JL. Human exposure to PBDE and critical evaluation of health hazards. Arch Toxicol . (2015) 89:335–56. doi: 10.1007/s00204-015-1457-1

14. Tian, F, Cai, M, Li, H, Qian, Z(M), Chen, L, Zou, H, et al. Air pollution associated with incident stroke, Poststroke cardiovascular events, and death: a trajectory analysis of a prospective cohort. Neurology . (2022) 99:e2474–84. doi: 10.1212/WNL.0000000000201316

15. Joubert, BR, Kioumourtzoglou, MA, Chamberlain, T, Chen, HY, Gennings, C, Turyk, ME, et al. Powering research through innovative methods for mixtures in epidemiology (PRIME) program: novel and expanded statistical methods. Int J Environ Res Public Health . (2022) 19:1378. doi: 10.3390/ijerph19031378

16. Hamra, GB, and Buckley, JP. Environmental exposure mixtures: questions and methods to address them. Curr Epidemiol Rep . (2018) 5:160–5. doi: 10.1007/s40471-018-0145-0

17. Braun, JM, Gennings, C, Hauser, R, and Webster, TF. What can epidemiological studies tell us about the impact of chemical mixtures on human health? Environ Health Perspect . (2016) 124:A6–9. doi: 10.1289/ehp.1510569

18. Kortenkamp, A. Ten years of mixing cocktails: a review of combination effects of endocrine-disrupting chemicals. Environ Health Perspect. (2007) 115:98–105. doi: 10.1289/ehp.9357

19. Kortenkamp, A. Low dose mixture effects of endocrine disrupters: implications for risk assessment and epidemiology. Int J Androl. (2008) 31:233–40. doi: 10.1111/j.1365-2605.2007.00862.x

20. Gibson, EA, Goldsmith, J, and Kioumourtzoglou, MA. Complex mixtures, complex analyses: an emphasis on interpretable results. Curr Environ Health Rep . (2019) 6:53–61. doi: 10.1007/s40572-019-00229-5

21. Stafoggia, M, Breitner, S, Hampel, R, and Basagaña, X. Statistical approaches to address multi-pollutant mixtures and multiple exposures: the state of the science. Curr Environ Health Rep . (2017) 4:481–90. doi: 10.1007/s40572-017-0162-z

22. Yu, L, Liu, W, Wang, X, Ye, Z, Tan, Q, Qiu, W, et al. A review of practical statistical methods used in epidemiological studies to estimate the health effects of multi-pollutant mixture. Environ Pollut . (2022) 306:119356. doi: 10.1016/j.envpol.2022.119356

23. Hamra, GB, Guha, N, Cohen, A, Laden, F, Raaschou-Nielsen, O, Samet, JM, et al. Outdoor particulate matter exposure and lung cancer: a systematic review and meta-analysis. Environ Health Perspect . (2014) 122:906–11. doi: 10.1289/ehp/1408092

24. Wolff, MS, Engel, SM, Berkowitz, GS, Ye, X, Silva, MJ, Zhu, C, et al. Prenatal phenol and phthalate exposures and birth outcomes. Environ Health Perspect . (2008) 116:1092–7. doi: 10.1289/ehp.11007

25. Varshavsky, JR, Zota, AR, and Woodruff, TJ. A novel method for calculating potency-weighted cumulative phthalates exposure with implications for identifying racial/ethnic disparities among U.S. reproductive-aged women in NHANES 2001–2012. Environ Sci Technol . (2016) 50:10616–24. doi: 10.1021/acs.est.6b00522

26. Zhang, B, Weuve, J, Langa, KM, D’Souza, J, Szpiro, A, Faul, J, et al. Comparison of particulate air pollution from different emission sources and incident dementia in the US. JAMA Intern Med . (2023) 183:1080–9. doi: 10.1001/jamainternmed.2023.3300

27. Carrico, C, Gennings, C, Wheeler, DC, and Factor-Litvak, P. Characterization of weighted quantile sum regression for highly correlated data in a risk analysis setting. J Agric Biol Environ Stat . (2015) 20:100–20. doi: 10.1007/s13253-014-0180-3

28. Tanner, EM, Bornehag, CG, and Gennings, C. Repeated holdout validation for weighted quantile sum regression. MethodsX . (2019) 6:2855–60. doi: 10.1016/j.mex.2019.11.008

29. Zhang, Y, Dong, T, Hu, W, Wang, X, Xu, B, Lin, Z, et al. Association between exposure to a mixture of phenols, pesticides, and phthalates and obesity: comparison of three statistical models. Environ Int . (2019) 123:325–36. doi: 10.1016/j.envint.2018.11.076

30. Wheeler, DC, Rustom, S, Carli, M, Whitehead, TP, Ward, MH, and Metayer, C. Assessment of grouped weighted quantile sum regression for modeling chemical mixtures and Cancer risk. Int J Environ Res Public Health . (2021) 18:504. doi: 10.3390/ijerph18020504

31. Wheeler, DC, Rustom, S, Carli, M, Whitehead, TP, Ward, MH, and Metayer, C. Bayesian group index regression for modeling chemical mixtures and Cancer risk. Int J Environ Res Public Health . (2021) 18:3486. doi: 10.3390/ijerph18073486

32. Gennings, C, Curtin, P, Bello, G, Wright, R, Arora, M, and Austin, C. Lagged WQS regression for mixtures with many components. Environ Res . (2020) 186:109529. doi: 10.1016/j.envres.2020.109529

33. Bello, GA, Arora, M, Austin, C, Horton, MK, Wright, RO, and Gennings, C. Extending the distributed lag model framework to handle chemical mixtures. Environ Res . (2017) 156:253–64. doi: 10.1016/j.envres.2017.03.031

34. Bobb, JF, Valeri, L, Claus Henn, B, Christiani, DC, Wright, RO, Mazumdar, M, et al. Bayesian kernel machine regression for estimating the health effects of multi-pollutant mixtures. Biostatistics . (2015) 16:493–508. doi: 10.1093/biostatistics/kxu058

35. Bobb, JF, Claus Henn, B, Valeri, L, and Coull, BA. Statistical software for analyzing the health effects of multiple concurrent exposures via Bayesian kernel machine regression. Environ Health . (2018) 17:67. doi: 10.1186/s12940-018-0413-y

36. Chen, L, Sun, Q, Peng, S, Tan, T, Mei, G, Chen, H, et al. Associations of blood and urinary heavy metals with rheumatoid arthritis risk among adults in NHANES, 1999–2018. Chemosphere . (2022) 289:133147. doi: 10.1016/j.chemosphere.2021.133147

37. Tan, Y, Fu, Y, Yao, H, Wu, X, Yang, Z, Zeng, H, et al. Relationship between phthalates exposures and hyperuricemia in U.S. general population, a multi-cycle study of NHANES 2007–2016. Sci Total Environ . (2023) 859:160208. doi: 10.1016/j.scitotenv.2022.160208

38. Devick, KL, Bobb, JF, Mazumdar, M, Claus Henn, B, Bellinger, DC, Christiani, DC, et al. Bayesian kernel machine regression-causal mediation analysis. Stat Med . (2022) 41:860–76. doi: 10.1002/sim.9255

39. Wilson, A, Hsu, HL, Chiu, YM, Wright, RO, Wright, RJ, and Coull, BA. Kernel machine and distributed lag models for assessing windows of susceptibility to environmental mixtures in children's health studies. Ann Appl Stat . (2022) 16:1090–110. doi: 10.1214/21-aoas1533

40. Yang, Z, Guo, C, Li, Q, Zhong, Y, Ma, S, Zhou, J, et al. Human health risks estimations from polycyclic aromatic hydrocarbons in serum and their hydroxylated metabolites in paired urine samples. Environ Pollut . (2021) 290:117975. doi: 10.1016/j.envpol.2021.117975

41. Haddad, S, Withey, J, Laparé, S, Law, F, and Krishnan, K. Physiologically-based pharmacokinetic modeling of pyrene in the rat. Environ Toxicol Pharmacol . (1998) 5:245–55. doi: 10.1016/S1382-6689(98)00008-8

42. Viau, C, Diakité, AS, Ruzgyté, A, Tuchweber, B, Blais, C, Bouchard, M, et al. Is 1-hydroxypyrene a reliable bioindicator of measured dietary polycyclic aromatic hydrocarbon under normal conditions? J Chromatogr B . (2002) 778:165–77. doi: 10.1016/S0378-4347(01)00465-0

43. Lei, B, Zhang, K, An, J, Zhang, X, and Yu, Y. Human health risk assessment of multiple contaminants due to consumption of animal-based foods available in the markets of Shanghai, China. Environ Sci Pollut Res . (2015) 22:4434–46. doi: 10.1007/s11356-014-3683-0

44. Gennings, C, Shu, H, Rudén, C, Öberg, M, Lindh, C, Kiviranta, H, et al. Incorporating regulatory guideline values in analysis of epidemiology data. Environ Int . (2018) 120:535–43. doi: 10.1016/j.envint.2018.08.039

45. Mork, D, and Wilson, A. Estimating perinatal critical windows of susceptibility to environmental mixtures via structured Bayesian regression tree pairs. Biometrics . (2023) 79:449–61. doi: 10.1111/biom.13568

46. Feldman, J, and Kowal, DR. A Bayesian framework for generation of fully synthetic mixed datasets. arXiv: Methodology . (2021). doi: 10.48550/arXiv.2102.08255

47. Kowal, DR. Bayesian subset selection and variable importance for interpretable prediction and classification. J Mach Learn Res. (2021) 23:108. doi: 10.48550/arXiv.2104.10150

48. Jin, B, Peruzzi, M, and Dunson, DB. Bag of DAGs: flexible and scalable modeling of spatiotemporal dependence. (2021).


49. Mork, D, and Wilson, A. Treed distributed lag nonlinear models. Biostatistics . (2020) 23:754–71. doi: 10.1093/biostatistics/kxaa051

50. Ferrari, F, and Dunson, DB. Bayesian Factor analysis for inference on interactions. J Am Stat Assoc . (2021) 116:1521–32. doi: 10.1080/01621459.2020.1745813

51. Kowal, DR. Fast, optimal, and targeted predictions using parameterized decision analysis. J Am Stat Assoc. (2020) 117:1875–86. doi: 10.1080/01621459.2021.1891926

52. Dunson, DB, Wu, HT, and Wu, N. Diffusion based Gaussian processes on restricted domains. arXiv: Methodology . (2020). doi: 10.48550/arXiv.2010.07242

53. Peruzzi, M, Banerjee, S, Dunson, D B, and Finley, AO. Grid-parametrize-Split (GriPS) for improved scalable inference in spatial big data analysis , (2021).

54. Antonelli, J, Wilson, A, and Coull, B. Multiple exposure distributed lag models with variable selection. Biostatistics . (2021) 2021:1. doi: 10.1289/isee.2021.O-SY-069

55. Ben Salem, K, and Ben, AA. Principal component analysis (PCA). Tunis Med . (2021) 99:383–9. doi: 10.1201/b10345-2

56. Smit, LA, Lenters, V, Høyer, BB, Lindh, CH, Pedersen, HS, Liermontova, I, et al. Prenatal exposure to environmental chemical contaminants and asthma and eczema in school-age children. Allergy . (2015) 70:653–60. doi: 10.1111/all.12605

57. Bair, E, Hastie, T, Paul, D, and Tibshirani, R. Prediction by supervised principal components. J Am Stat Assoc . (2006) 101:119–37. doi: 10.1198/016214505000000628

58. Roberts, S, and Martin, MA. Using supervised principal components analysis to assess multiple pollutant effects. Environ Health Perspect . (2006) 114:1877–82. doi: 10.1289/ehp.9226

59. Gibson, EA, Zhang, J, Yan, J, Chillrud, L, Benavides, J, Nunez, Y, et al. Principal component pursuit for pattern identification in environmental mixtures. Environ Health Perspect . (2022) 130:117008. doi: 10.1289/EHP10479

60. Paatero, P, and Tapper, U. Positive matrix factorization: a non-negative factor model with optimal utilization of error estimates of data values†. Environmetrics . (1994) 5:111–26. doi: 10.1002/env.3170050203

61. Krall, JR, and Strickland, MJ. Recent approaches to estimate associations between source-specific air pollution and health. Curr Environ Health Rep . (2017) 4:68–78. doi: 10.1007/s40572-017-0124-5

62. Krall, JR, Mulholland, JA, Russell, AG, Balachandran, S, Winquist, A, Tolbert, PE, et al. Associations between source-specific fine particulate matter and emergency department visits for respiratory disease in four U.S. cities. Environ Health Perspect . (2016) 125:97–103. doi: 10.1289/EHP271

63. Dai, L, Bind, MA, Koutrakis, P, Coull, BA, Sparrow, D, Vokonas, PS, et al. Fine particles, genetic pathways, and markers of inflammation and endothelial dysfunction: analysis on particulate species and sources. J Expo Sci Environ Epidemiol . (2016) 26:415–21. doi: 10.1038/jes.2015.83

64. Siponen, T, Yli-Tuomi, T, Aurela, M, Dufva, H, Hillamo, R, Hirvonen, MR, et al. Source-specific fine particulate air pollution and systemic inflammation in ischaemic heart disease patients. Occup Environ Med . (2014) 72:277–83. doi: 10.1136/oemed-2014-102240

65. Roy, A, Lavine, I, Herring, AH, and Dunson, DB. Perturbed factor analysis: accounting for group differences in exposure profiles. Ann Appl Stat . (2021) 15:1386. doi: 10.1214/20-AOAS1435

66. Reid, S, and Tibshirani, R. Sparse regression and marginal testing using cluster prototypes. Biostatistics . (2016) 17:364–76. doi: 10.1093/biostatistics/kxv049

67. Steinley, D . K-means clustering: a half-century synthesis. Br J Math Stat Psychol . (2006) 59:1–34. doi: 10.1348/000711005X48266

68. Ljungman, PL, Wilker, EH, Rice, MB, Austin, E, Schwartz, J, Gold, DR, et al. The impact of multipollutant clusters on the association between fine particulate air pollution and microvascular function. Epidemiology . (2016) 27:194–201. doi: 10.1097/EDE.0000000000000415

69. Coker, E, Liverani, S, Ghosh, JK, Jerrett, M, Beckerman, B, Li, A, et al. Multi-pollutant exposure profiles associated with term low birth weight in Los Angeles County. Environ Int . (2016) 91:1–13. doi: 10.1016/j.envint.2016.02.011

70. Wold, H . Estimation of principal components and related models by iterative least squares. Multivar Anal . (1966):1.

71. Mevik, B-H, and Wehrens, R. The pls package: principal component and partial least squares regression in R. J Stat Softw . (2007) 18:1–23. doi: 10.18637/jss.v018.i02

72. Chun, H, and Keleş, S. Sparse partial least squares regression for simultaneous dimension reduction and variable selection. J R Stat Soc Series B Stat Methodol . (2010) 72:3–25. doi: 10.1111/j.1467-9868.2009.00723.x

73. Agier, L, Portengen, L, Chadeau-Hyam, M, Basagaña, X, Giorgis-Allemand, L, Siroux, V, et al. A systematic comparison of linear regression–based statistical methods to assess Exposome-health associations. Environ Health Perspect . (2016) 124:1848–56. doi: 10.1289/EHP172

74. Sinisi, SE, and Van Der Laan, MJ. Deletion/substitution/addition algorithm in learning with applications in genomics. Stat Appl Genet Mol Biol . (2004) 3:1–38. doi: 10.2202/1544-6115.1069

75. Sun, Z, Tao, Y, Li, S, Ferguson, KK, Meeker, JD, Park, SK, et al. Statistical strategies for constructing health risk models with multiple pollutants and their interactions: possible choices and comparisons. Environ Health . (2013) 12:85. doi: 10.1186/1476-069X-12-85

76. Beckerman, BS, Jerrett, M, Martin, RV, van Donkelaar, A, Ross, Z, and Burnett, RT. Application of the deletion/substitution/addition algorithm to selecting land use regression models for interpolating air pollution measurements in California. Atmos Environ . (2013) 77:172–7. doi: 10.1016/j.atmosenv.2013.04.024

77. Dominici, F, Wang, C, Crainiceanu, C, and Parmigiani, G. Model selection and health effect estimation in environmental epidemiology. Epidemiology . (2008) 19:558–60. doi: 10.1097/EDE.0b013e31817307dc

78. Tibshirani, R . Regression shrinkage and selection via the lasso. J R Stat Soc Series B Stat Methodol . (1996) 58:267–88. doi: 10.1111/j.2517-6161.1996.tb02080.x

79. Zou, H, and Hastie, TJ. Regularization and variable selection via the elastic net. J R Stat Soc Series B Stat Methodol . (2005) 67:301–20. doi: 10.1111/j.1467-9868.2005.00503.x

80. Dai, L, Koutrakis, P, Coull, BA, Sparrow, D, Vokonas, PS, and Schwartz, JD. Use of the adaptive LASSO method to identify PM2.5 components associated with blood pressure in elderly men: the veterans affairs normative aging study. Environ Health Perspect . (2016) 124:120–5. doi: 10.1289/ehp.1409021

81. Lenters, V, Portengen, L, Rignell-Hydbom, A, Jönsson, BAG, Lindh, CH, Piersma, AH, et al. Prenatal phthalate, Perfluoroalkyl acid, and organochlorine exposures and term birth weight in three birth cohorts: multi-pollutant models based on elastic net regression. Environ Health Perspect . (2016) 124:365–72. doi: 10.1289/ehp.1408933

82. Lim, M, and Hastie, T. Learning interactions via hierarchical group-lasso regularization. J Comput Graph Stat . (2015) 24:627–54. doi: 10.1080/10618600.2014.938812

83. Huang, H . Controlling the false discoveries in LASSO. Biometrics . (2017) 73:1102–10. doi: 10.1111/biom.12665

84. Loh, WY . Classification and regression trees. Wiley Interdiscip Rev Data Min Knowl Discov . (2011) 1:14–23. doi: 10.1002/widm.8

85. Biau, G . Analysis of a random forests model. J Mach Learn Res . (2012) 13:1063–95.

86. Smola, AJ, and Schölkopf, B. A tutorial on support vector regression. Stat Comput . (2004) 14:199–222. doi: 10.1023/B:STCO.0000035301.49549.88

87. Peterson, LE . K-nearest neighbor. Scholarpedia . (2009) 4:1883. doi: 10.4249/scholarpedia.1883

88. Webb, GI, Keogh, E, and Miikkulainen, R. Naïve Bayes. Encycl Mach Learn . (2010) 15:713–4. doi: 10.1007/978-0-387-30164-8_576

89. Bishop, CM . Neural networks and their applications. Rev Sci Instrum . (1994) 65:1803–32. doi: 10.1063/1.1144830

90. Margineantu, D D, and Dietterich, T G. Pruning adaptive boosting. ICML , (1997): 211–218.

91. Friedman, JH . Stochastic gradient boosting. Comput Stat Data Anal . (2002) 38:367–78. doi: 10.1016/S0167-9473(01)00065-2

92. Chen, T, and Guestrin, C. Xgboost: a scalable tree boosting system. Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining, (2016): 785–794.

93. Ke, G, Meng, Q, Finley, T, Wang, T, Chen, W, Ma, W, et al. Lightgbm: a highly efficient gradient boosting decision tree. Adv Neural Inf Proces Syst . (2017) 30:3146–3154. doi: 10.5555/3294996.3295074

94. Prokhorenkova, L, Gusev, G, Vorobev, A, Dorogush, AV, and Gulin, A. CatBoost: unbiased boosting with categorical features. Adv Neural Inf Proces Syst . (2018) 31. doi: 10.48550/arXiv.1706.09516

95. Lundberg, SM, and Lee, S-I. A unified approach to interpreting model predictions. Adv Neural Inf Proces Syst . (2017) 30:4768–4777. doi: 10.48550/arXiv.1705.07874

96. Li, X, Zhao, Y, Zhang, D, Kuang, L, Huang, H, Chen, W, et al. Development of an interpretable machine learning model associated with heavy metals' exposure to identify coronary heart disease among US adults via SHAP: findings of the US NHANES from 2003 to 2018. Chemosphere . (2023) 311:137039. doi: 10.1016/j.chemosphere.2022.137039

97. Li, W, Huang, G, Tang, N, Lu, P, Jiang, L, Lv, J, et al. Effects of heavy metal exposure on hypertension: a machine learning modeling approach. Chemosphere . (2023) 337:139435. doi: 10.1016/j.chemosphere.2023.139435

98. Mauderly, JL, and Samet, JM. Is there evidence for synergy among air pollutants in causing health effects? Environ Health Perspect . (2009) 117:1–6. doi: 10.1289/ehp.11654

99. Liu, C, Chen, R, Sera, F, Vicedo-Cabrera, AM, Guo, Y, Tong, S, et al. Interactive effects of ambient fine particulate matter and ozone on daily mortality in 372 cities: two stage time series analysis. BMJ . (2023) 383:e075203. doi: 10.1136/bmj-2023-075203

100. Antonelli, J, Mazumdar, M, Bellinger, DC, Christiani, D, Wright, R, and Coull, B. Estimating the health effects of environmental mixtures using Bayesian semiparametric regression and sparsity inducing priors. Ann Appl Stat . (2017) 14:275–75. doi: 10.48550/arXiv.1711.11239

101. Davalos, AD, Luben, TJ, Herring, AH, and Sacks, JD. Current approaches used in epidemiologic studies to examine short-term multipollutant air pollution exposures. Ann Epidemiol . (2017) 27:145–153.e1. doi: 10.1016/j.annepidem.2016.11.016

102. Tomarken, AJ, and Waller, NG. Structural equation modeling: strengths, limitations, and misconceptions. Annu Rev Clin Psychol . (2005) 1:31–65. doi: 10.1146/annurev.clinpsy.1.102803.144239

103. Stein, CM, Morris, NJ, and Nock, NL. Structural equation modeling. Methods Mol Biol . (2012) 850:495–512. doi: 10.1007/978-1-61779-555-8_27

104. Shook-Sa, BE, Chen, DG, and Zhou, H. Using structural equation modeling to assess the links between tobacco smoke exposure, volatile organic compounds, and respiratory function for adolescents aged 6 to 18 in the United States. Int J Environ Res Public Health . (2017) 14:1112. doi: 10.3390/ijerph14101112

105. Shan, Z, Chen, S, Sun, T, Luo, C, Guo, Y, Yu, X, et al. U-shaped association between plasma manganese levels and type 2 diabetes. Environ Health Perspect . (2016) 124:1876–81. doi: 10.1289/EHP176

106. Wang, B, Cheng, M, Yang, S, Qiu, W, Li, W, Zhou, Y, et al. Exposure to acrylamide and reduced heart rate variability: the mediating role of transforming growth factor-β. J Hazard Mater . (2020) 395:122677. doi: 10.1016/j.jhazmat.2020.122677

107. Zhou, J, Meng, X, Deng, L, and Liu, N. Non-linear associations between metabolic syndrome and four typical heavy metals: data from NHANES 2011–2018. Chemosphere . (2022) 291:132953. doi: 10.1016/j.chemosphere.2021.132953

108. Lamichhane, DK, Lee, S-Y, Ahn, K, Kim, KW, Shin, YH, Suh, DI, et al. Quantile regression analysis of the socioeconomic inequalities in air pollution and birth weight. Environ Int . (2020) 142:105875. doi: 10.1016/j.envint.2020.105875

109. Valeri, L, Mazumdar, MM, Bobb, JF, Claus Henn, B, Rodrigues, E, Sharif, OIA, et al. The joint effect of prenatal exposure to metal mixtures on neurodevelopmental outcomes at 20–40 months of age: evidence from rural Bangladesh. Environ Health Perspect . (2017) 125:067015. doi: 10.1289/EHP614

110. Liu, JZ, Deng, W, Lee, J, Lin, PD, Valeri, L, Christiani, DC, et al. A cross-validated ensemble approach to robust hypothesis testing of continuous nonlinear interactions: application to nutrition-environment studies. J Am Stat Assoc . (2022) 117:561–73. doi: 10.1080/01621459.2021.1962889

111. Wild, CP . Complementing the genome with an "exposome": the outstanding challenge of environmental exposure measurement in molecular epidemiology. Cancer Epidemiol Biomarkers Prev . (2005) 14:1847–50. doi: 10.1158/1055-9965.EPI-05-0456

112. Wild, CP . The exposome: from concept to utility. Int J Epidemiol . (2012) 41:24–32. doi: 10.1093/ije/dyr236

113. Rappaport, SM, and Smith, MT. Epidemiology. Environment and disease risks. Science . (2010) 330:460–1. doi: 10.1126/science.1192603

114. Khoury, MJ, and Wacholder, S. Invited commentary: from genome-wide association studies to gene-environment-wide interaction studies--challenges and opportunities. Am J Epidemiol . (2009) 169:227–30. doi: 10.1093/aje/kwn351

115. Thomas, D . Gene--environment-wide association studies: emerging approaches. Nat Rev Genet . (2010) 11:259–72. doi: 10.1038/nrg2764

116. Haddad, N, Andrianou, X, Parrish, C, Oikonomou, S, and Makris, KC. An exposome-wide association study on body mass index in adolescents using the National Health and nutrition examination survey (NHANES) 2003–2004 and 2013–2014 data. Sci Rep . (2022) 12:8856. doi: 10.1038/s41598-022-12459-z

Keywords: health effects, epidemiology, statistical methods, multi-pollutant mixtures, environment

Citation: Zhu G, Wen Y, Cao K, He S and Wang T (2024) A review of common statistical methods for dealing with multiple pollutant mixtures and multiple exposures. Front. Public Health . 12:1377685. doi: 10.3389/fpubh.2024.1377685

Received: 28 January 2024; Accepted: 15 April 2024; Published: 09 May 2024.

Reviewed by:

Copyright © 2024 Zhu, Wen, Cao, He and Wang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Tong Wang, [email protected]

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.

COMMENTS

  1. Multiple assignment in Python: Assign multiple values or the same value to multiple variables

    You can swap the values of multiple variables in a single statement (see Swap values in a list or values of variables in Python for details), and you can assign the same value to multiple variables by using = consecutively.
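
    A minimal sketch of both techniques (all names are illustrative):

        # Swap two variables in one statement
        a, b = 1, 2
        a, b = b, a
        print(a, b)  # 2 1

        # Chain = to give several variables the same value
        x = y = z = 0
        print(x, y, z)  # 0 0 0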

  2. Assigning multiple variables in one line in Python

    Beyond assigning a single variable, Python can bind several variables in one statement: the whole right-hand side is evaluated first, and the results are then assigned to the comma-separated names on the left of the assignment operator.
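
    For example (illustrative values), the variables may even hold different types:

        # The right-hand side is built first, then unpacked left to right
        name, age, lang = "Ada", 36, "Python"
        print(name, age, lang)  # Ada 36 Python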

  3. Efficient Coding with Python: Mastering Multiple Variable Assignment

    Multiple variable assignment is a testament to Python's design philosophy of simplicity and elegance. By understanding and effectively using this feature, whether unpacking sequences or swapping values, you can write more concise, readable, and Pythonic code.

  4. Python's Assignment Operator: Write Robust Assignments

    In a statement of the form variable = expression, the right-hand side can be a concrete value (a literal) or any expression that evaluates to a value. To execute the assignment, Python first evaluates the right-hand expression to produce a concrete value or object, then binds the name on the left to it.
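
    A small sketch of the two steps, evaluate then bind:

        x = 3 * 4      # the object 12 is produced, then x is bound to it
        y = x          # y is bound to the same object x refers to
        print(y is x)  # True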

  5. Multiple assignment and evaluation order in Python

    With multiple assignment you can set the initial values a, b = 0, 1 and then, inside a while loop, rebind both names at once with a, b = b, a + b. Read it as (a, b) = (b, a + b): the tuple on the right is built from the current values of a and b before either name is reassigned, so a becomes the old b and b becomes the old a + b on each iteration.
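
    A runnable sketch of the Fibonacci idiom described above:

        # The right-hand tuple (b, a + b) is built from the current values
        # before either name is rebound
        a, b = 0, 1
        while a < 30:
            print(a, end=" ")
            a, b = b, a + b
        # Output: 0 1 1 2 3 5 8 13 21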

  6. Multiple Assignment Syntax in Python

    The multiple assignment syntax, often referred to as tuple unpacking or extended unpacking, is a powerful feature in Python. There are several ways to assign multiple values to variables at once; extended unpacking, for example, assigns values from an iterable (such as a string) to several variables in a single statement.
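
    A short sketch of extended unpacking from a string, with a starred target collecting the middle:

        first, *middle, last = "Python"
        print(first)   # P
        print(middle)  # ['y', 't', 'h', 'o']
        print(last)    # n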

  7. Variable Assignment

    Think of a variable as a name attached to a particular object. In Python, variables need not be declared or defined in advance, as is the case in many other programming languages: to create a variable, you just assign it a value with a single equals sign (=) and then start using it.
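
    For instance:

        # No declaration step: a name is created by its first assignment
        message = "hello"
        count = 3
        print(message * count)  # hellohellohello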

  8. What is Multiple Assignment in Python and How to use it?

    Multiple assignment in Python is the process of assigning values to multiple variables in a single statement. Instead of writing an individual assignment for each variable, you group them in one line of code, as in the sketch below, where x, y, and z receive 10, 20, and 30, respectively.
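
    The example reconstructed from the description above:

        x, y, z = 10, 20, 30
        print(x, y, z)  # 10 20 30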

  9. Assign Multiple Variables in Python

    You can assign the same value to multiple variables by placing the assignment operator (=) consecutively between the variable names and putting a single value at the end, for example a = b = c = 200. The value is evaluated once and assigned to each name in turn, so all three names end up referring to the same object.
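
    A sketch showing that the chained form binds every name to one object:

        a = b = c = 200
        print(a, b, c)      # 200 200 200
        print(a is b is c)  # True: one object, three names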

  10. Unpacking And Multiple Assignment in Python on Exercism

    Unpacking refers to extracting the elements of a collection, such as a list, tuple, or dict, using iteration; the unpacked values can then be assigned to variables within the same statement. A very common example of this behavior is for item in list, where item takes on the value of each element in turn.
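
    Unpacking also happens inside a for statement, as in this sketch (points is an illustrative list):

        points = [(1, 2), (3, 4)]
        for x, y in points:  # each tuple is unpacked into x and y
            print(x, y)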

  11. Multiple assignment and tuple unpacking improve Python code readability

    By convention, the variable name _ marks a value you don't care about, such as sys.argv[0] (the name of the program). Multiple assignment is also an alternative to slicing: it avoids hard-coded indexes and ensures you are strict about the size of the tuples or iterables you are working with.
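
    A sketch of both points; the argv list here is a stand-in for sys.argv:

        argv = ["prog.py", "input.txt", "output.txt"]
        _, source, destination = argv  # _ marks the value we ignore
        print(source, destination)  # input.txt output.txt
        # Unpacking is strict: a two- or four-element argv would raise ValueError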

  12. How To Assign Values To Variables In Python?

    In Python, multiple values can be assigned to variables in a single, efficient line of code by separating each variable and value with commas. This streamlines initializing several variables at once and makes the code more concise and readable.

  13. Linux Bash: Multiple Variable Assignment

    Multiple variable assignment in Bash scripts makes code compact and can improve performance, particularly when assigning several variables from the output of one expensive command, for example filling seven variables (the calendar week number, year, month, and so on) from a single invocation.
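
    A rough Python analogue of the same idea, assuming a POSIX date command is available:

        import subprocess

        # One external call, three variables filled from its output
        week, year, month = subprocess.check_output(
            ["date", "+%V %Y %m"], text=True
        ).split()
        print(week, year, month)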

  14. Multiple Variable Assignment in JavaScript

    In JavaScript, a destructuring assignment helps assign multiple variables the same value without leaking them outside the function, and the fill() method updates all elements of an array with a static value and returns the modified array; destructuring a filled array therefore yields the same value (for example 1, 1, 1, 1) for every variable.
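
    A Python analogue of the fill() trick, unpacking a repeated list:

        a, b, c, d = [1] * 4
        print(a, b, c, d)  # 1 1 1 1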

  15. Assignment (=)

    The assignment (=) operator is used to assign a value to a variable or property. In JavaScript, the assignment expression itself has a value, namely the assigned value, which allows multiple assignments to be chained in order to assign a single value to multiple variables.
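
    For contrast, plain = in Python is a statement with no value; the walrus operator := (Python 3.8+) is the expression form, sketched here:

        values = [1, 2, 3]
        if (n := len(values)) > 2:  # := both assigns n and yields the value
            print(n)  # 3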

  16. about Assignment Operators

    In PowerShell, you can assign values to multiple variables using a single command: the first element of the assignment value is assigned to the first variable, the second element to the second variable, the third element to the third, and so on. This is known as multiple assignment.

  17. Proposal: Annotate types in multiple assignment

    In recent versions of Python (e.g., 3.12), a type annotation is available for a single-variable assignment, such as a: int = 1. However, there is no valid syntax for annotating the targets of a multiple assignment in one statement: a: int, b: bool = fun() is rejected, which is awkward when a function's return tuple is hard to annotate strictly.
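
    One workaround that is valid today is to annotate the names separately before unpacking; a sketch, with fun standing in for the hypothetical function from the proposal:

        from typing import Any

        def fun() -> Any:
            return 1, True

        a: int  # annotate first...
        b: bool
        a, b = fun()  # ...then unpack; a: int, b: bool = fun() would be a SyntaxError
        print(a, b)  # 1 True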

  18. python

    A common question: in Python, is there a shorthand for assigning each variable the return value of a separate call to a function such as random_int() (which returns random.randint(1, 100)), rather than writing one assignment statement per variable?
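
    A minimal sketch of the usual answers, assuming the random_int helper from the question:

        import random

        def random_int():
            return random.randint(1, 100)

        # Write out one call per target...
        a, b = random_int(), random_int()
        # ...or use a generator expression when there are many targets
        c, d, e = (random_int() for _ in range(3))
        print(a, b, c, d, e)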

  19. 9. Classes

    Note how the local assignment (the default) didn't change scope_test's binding of spam, while the nonlocal assignment changed scope_test's binding and the global assignment changed the module-level binding. You can also see that there was no previous binding for spam before the global assignment. Classes, covered next in the tutorial, introduce a little bit of new syntax, three new object types, and some new semantics.
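
    The scope_test example from the official tutorial that this note refers to:

        def scope_test():
            def do_local():
                spam = "local spam"

            def do_nonlocal():
                nonlocal spam
                spam = "nonlocal spam"

            def do_global():
                global spam
                spam = "global spam"

            spam = "test spam"
            do_local()
            print("After local assignment:", spam)
            do_nonlocal()
            print("After nonlocal assignment:", spam)
            do_global()
            print("After global assignment:", spam)

        scope_test()
        print("In global scope:", spam)
        # After local assignment: test spam
        # After nonlocal assignment: nonlocal spam
        # After global assignment: nonlocal spam
        # In global scope: global spam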

  20. variables

    Assignment in JavaScript works from right to left. In var var1 = var2 = var3 = 1;, if every one of these variables holds 1 after the statement, the value must logically have propagated from the rightmost assignment; otherwise var1 and var2 would be undefined.

  21. c

    In C, prefer the separate initializations sample1 = 0; sample2 = 0; over a chained assignment, especially when initializing to a non-zero value, because the chained form sample1 = sample2 = 0; translates to sample2 = 0; followed by sample1 = sample2; (one initialization plus one copy instead of two initializations). Any speed-up is tiny, but in embedded code every bit counts.

  22. expression

    In C#, the line X = Y = Z = n++ is processed right to left: first the value of n (100) is assigned to Z, then n is incremented to 101, then the value of Z is assigned to Y (100), and finally the value of Y is assigned to X (100). 'Multiple assignments in a single line' is therefore a supported style.
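
    Python behaves differently: the right-hand side is evaluated once, then the targets are assigned from left to right. A small probe class makes the order visible:

        class Probe:
            """Records the order in which chained targets are assigned."""
            def __setitem__(self, key, value):
                print("assigning", key, "=", value)

        p = Probe()
        p["x"] = p["y"] = 0  # value computed once, targets bound left to right
        # Output:
        # assigning x = 0
        # assigning y = 0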