A recent paper by OpenAI evaluates the labor-market impact of Large Language Models (LLMs).
I read this paper carefully and would like to share some interesting findings.
https://arxiv.org/pdf/2303.10130v1.pdf
ReadPaper Notes:
https://readpaper.com/pdf-annotate/note?pdfId=4735742443702468609&noteId=1704418301541740032
Generative Pre-trained Transformers (GPTs) as General-Purpose Technologies
General-purpose technologies (GPTs) are technologies that can affect an entire economy (usually at a national or global level). Examples include the steam engine and the internet. The paper discusses the potential future development of Generative Pre-trained Transformers (GPTs) as GPTs.
Key Findings:
Exposure: The paper defines exposure as a measure of whether access to a GPT or GPT-powered system would reduce the time required for a human to perform a specific task by at least 50%.
Wages: Occupations with higher wages generally show higher exposure, meaning well-paying jobs could be vulnerable as GPTs become more prevalent in the workplace.
Skills: Occupations requiring programming and writing skills are more susceptible to being influenced by language models, while occupations requiring science and critical thinking skills are less likely to be impacted.
Barriers to Entry: Individuals holding Bachelor’s, Master’s, and professional degrees are more exposed to GPTs and GPT-powered software than those without formal educational credentials. Jobs with the least exposure require the longest training, potentially offering a lower payoff.
What are general-purpose technologies (GPTs)?
General-purpose technologies (GPTs) are technologies that can affect an entire economy (usually at a national or global level) due to their pervasive nature, continuous improvement over time, and the development of significant co-inventions and spillovers. Examples of GPTs include the steam engine, electricity, and the internet.
The paper argues that large language models such as Generative Pre-trained Transformers have the potential to become highly influential across economic sectors. To fully realize that potential, they must be incorporated into broader systems, which may initially face co-invention barriers that could impede their rapid diffusion into economic applications.
Predicting the need for human oversight in tasks where GPT capabilities equal or surpass human levels is challenging. The requirement for human supervision may initially slow down the adoption and diffusion rate of GPTs. However, over time, users of GPTs and GPT-powered systems are likely to become increasingly acquainted with the technology, particularly in terms of understanding when and how to trust its outputs.
The paper also highlights that if "GPTs are GPTs," the eventual trajectory of LLM development and application may be challenging for policymakers to predict and regulate, due to the far-reaching and transformative impact of such technologies on various aspects of society and the economy.
This paper provides many counter-intuitive results.
Exposure: a measure of whether access to a GPT or GPT-powered system would reduce the time required for a human to perform a specific DWA (Detailed Work Activity) or complete a task by at least 50%.
Tasks with higher exposure can be completed much faster with the help of GPTs.
Occupations with higher wages generally show higher exposure.
This means well-paying jobs could be vulnerable as GPTs become more prevalent in the workplace.
Our findings indicate that the importance of science and critical thinking skills are strongly negatively associated with exposure, suggesting that occupations requiring these skills are less likely to be impacted by current language models. Conversely, programming and writing skills show a strong positive association with exposure, implying that occupations involving these skills are more susceptible to being influenced by language models.
This suggests that programming and writing skills will become even more important in the age of GPTs, as these skills are positively correlated with LLM exposure. Alternatively, it may mean that jobs requiring programming and writing skills are the ones most affected by GPTs. Roles that rely heavily on science and critical thinking skills, by contrast, are less likely to be affected, indicating that workers in these fields may be relatively safe from displacement.
Our analysis suggests that individuals holding Bachelor’s, Master’s, and professional degrees are more exposed to GPTs and GPT-powered software than those without formal educational credentials.
We observe that the jobs with the least exposure require the longest training, potentially offering a lower payoff (in terms of median income) once competency is achieved. Conversely, jobs with no on-the-job training required or only internship/residency required appear to yield higher income but are more exposed to GPT.
No exposure (E0) if: • there is no or minimal reduction in the time required to complete the activity or task while maintaining equivalent quality or • using any combination of the capabilities described in accordance with the below criteria would decrease the quality of the activity/task output.
Direct exposure (E1) if: • using solely the theoretical LLM or GPT-4 described via ChatGPT or the OpenAI playground can decrease the time required to complete the DWA or task by at least half (50%).
LLM+ Exposed (E2) if: • access to the LLM alone would not reduce the time required to complete the activity/task by at least half, but • additional software could be developed on top of the LLM that could reduce the time it takes to complete the specific activity/task with quality by at least half. Among these systems, we count access to image generation systems.
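The three-level rubric above can be sketched as a small classifier. This is a minimal illustration, not code from the paper; the function name and its inputs (fractional time savings with and without LLM-built software, and whether quality is preserved) are hypothetical:

```python
def exposure_label(llm_time_saved: float,
                   software_time_saved: float,
                   quality_preserved: bool = True) -> str:
    """Map a task assessment to the paper's E0/E1/E2 exposure rubric.

    llm_time_saved: fraction of time saved using the LLM alone.
    software_time_saved: fraction saved with additional software
        built on top of the LLM.
    """
    if not quality_preserved:
        return "E0"  # any time saving that degrades output quality doesn't count
    if llm_time_saved >= 0.5:
        return "E1"  # direct exposure: the LLM alone halves completion time
    if software_time_saved >= 0.5:
        return "E2"  # LLM+ exposed: the halving requires LLM-powered software
    return "E0"      # no exposure: no 50% reduction either way
```

Note the ordering: E1 is checked before E2, mirroring the rubric's requirement that E2 applies only when the LLM alone is insufficient.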
Table 4: Occupations with the highest exposure according to each measurement.
The final row lists the occupations with the highest $\sigma^2$ value, indicating that they had the most variability in exposure predictions.
Exposure percentages indicate the share of an occupation’s tasks that are exposed to GPTs (𝛼) or GPT-powered software (𝛽 and 𝜁), where exposure is defined as driving a reduction in the time it takes to complete the task by at least 50%.
As such, occupations listed in this table are those where we estimate that GPTs and GPT-powered software are able to save workers a significant amount of time completing a large share of their tasks, but it does not necessarily suggest that their tasks can be fully automated by these technologies.
Definition of 𝛼, 𝛽 and 𝜁
𝛼 = E1 corresponds to E1 in the exposure rubric defined above. It represents the lower bound of the proportion of exposed tasks within an occupation.
𝛽 = E1 + 0.5·E2. The weight of 0.5 on E2 accounts for exposure to the technology when deploying it via complementary tools and applications that require additional investment.
𝜁 = E1 + E2, the sum of E1 and E2. This is an upper bound of exposure that provides an assessment of the maximal exposure to GPT and GPT-powered software.
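The three measures follow directly from the share of an occupation's tasks labeled E1 and E2. A minimal sketch (the function name and list-of-labels input format are assumptions for illustration):

```python
def exposure_scores(labels: list[str]) -> tuple[float, float, float]:
    """Aggregate task-level E0/E1/E2 labels into (alpha, beta, zeta)."""
    n = len(labels)
    e1 = labels.count("E1") / n  # share of directly exposed tasks
    e2 = labels.count("E2") / n  # share exposed only via LLM-powered software
    alpha = e1             # lower bound: direct exposure only
    beta = e1 + 0.5 * e2   # half-weight on software-mediated exposure
    zeta = e1 + e2         # upper bound: maximal exposure
    return alpha, beta, zeta
```

For example, an occupation with tasks labeled E1, E2, E0, E1 would score 𝛼 = 0.5, 𝛽 = 0.625, 𝜁 = 0.75, so 𝛼 ≤ 𝛽 ≤ 𝜁 always holds.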
Table 5: OLS Regression Results of Exposure Measures on O*NET Skills
Programming shows the largest exposure coefficient for 𝛼 and 𝛽.
Mathematics shows the largest exposure coefficient for 𝜁.