More people are using ChatGPT to create and publish books.

More and more people are using the artificial intelligence program ChatGPT to create books for sale. Despite slow sales so far, many writers worry that ChatGPT-generated books could harm the writing and publishing industry.

1. ChatGPT helps create children’s books.

Until recently, Schickler never thought he could become a published writer. But after learning about the ChatGPT artificial intelligence program, Schickler thinks he has a good chance.

(Photo: the ChatGPT logo, March 2023.)

The idea of writing a book finally seemed possible. Using the software, Schickler created a 30-page illustrated children’s e-book in a few hours and sold it through Amazon’s self-publishing service in January. The e-book, The Wise Little Squirrel: A Tale of Saving and Investing, has earned Schickler less than $100. That may not sound like much, but it is enough to make him want to use the software to create other books.

As of early 2023, there were more than 200 e-books in Amazon’s Kindle store listing ChatGPT as an author or co-author, and the number is rising every day. However, because ChatGPT’s output is hard to detect and many writers do not disclose using it, it is almost impossible to get a full count of how many e-books are written with AI.

2. Influence

Some professional writers have begun to worry about ChatGPT’s possible impact on the book publishing industry.

Mary Rasenberger is the executive director of the Authors Guild. "This is something we really need to worry about. These books will flood the market and many authors will lose their jobs," she said. Rasenberger noted that the industry has a long tradition of ghostwriting, the accepted practice of paying someone to write a book or speech in another author’s name. But she worries that the ability to create with artificial intelligence could turn book writing from an art into a commodity: a simple raw material to be bought and sold.

Amazon did not disclose whether it plans to change or review its policies on authors’ use of artificial intelligence or other automated writing tools. Lindsay Hamilton, an Amazon spokesperson, said by email that books in the store must comply with its guidelines on intellectual property rights and other laws.

3. Quick release

Amazon is by far the biggest seller of both physical books and e-books, accounting for more than half of physical-book sales in the United States and, by some estimates, more than 80% of the e-book market. In 2007, Amazon created Kindle Direct Publishing so that anyone could sell and market a book without a literary agent or publishing house. Generally, Amazon lets authors publish without any vetting and then splits the revenue with the writer.

This service has attracted new AI-assisted writers such as Kamil Banc to Amazon. He told his wife he could produce a book in less than a day. Using ChatGPT, an artificial intelligence image generator, and prompts such as "write a bedtime story about a pink dolphin that teaches children how to be honest," Banc published a 27-page illustrated book in July.

Since then, Banc has published two more AI-generated books, including an adult coloring book, with more on the way. "It’s actually very simple," he said. "I was surprised at how fast it went from concept to publication."

Not everyone is impressed by the software. Mark Dawson, who has reportedly sold millions of self-written books through Kindle Direct Publishing, dismissed ChatGPT-assisted novels as "dull."

Merit, the quality of the writing itself, matters a great deal in the book industry, because it drives how books are recommended to other readers. If a book gets bad reviews because of dull writing, it quickly sinks to the bottom.

Which schools in Hunan offer an artificial intelligence manufacturing specialty in 2023?

Introduction to the artificial intelligence manufacturing specialty

Training objective

Cultivate high-quality technical and skilled talent who are familiar with the software project development process and who master artificial intelligence analysis, big data storage and processing, algorithm design, data modeling, and intelligent analysis of commercial data.

Main courses

Java programming, Hadoop big data management and application, Python data analysis, Spark big data processing, machine learning technology and application, Python web crawler technology, principles and applications of artificial intelligence frameworks, etc.

Employment directions

Software design and development, software testing, artificial intelligence algorithm design, web crawlers, machine learning, image recognition/computer vision, big data processing, big data platform construction and operation, big data analysis, and other positions.

Chinese supercomputers help predict virus mutations: 219 of the world’s top 500 supercomputers are in China.

China is tapping the innovation potential of the Internet, big data and artificial intelligence, aiming to become a superpower in the field of innovation, and it has already made notable achievements. China hopes to be a world leader in artificial intelligence by 2030 and to become "the world’s major artificial intelligence innovation center."

For example, according to the China Internet Development Report 2019, China’s capacity for independent innovation in network information technology has continued to strengthen, and a prototype of a new-generation exascale supercomputer has been developed. Before that, six Chinese supercomputing centers were interconnected, creating the world’s first national supercomputing internet. In the future, artificial intelligence will also be used to build space-based supercomputers.

For another example, China has now mastered world-leading technologies such as high-speed rail, large aircraft, aircraft carriers, combustible-ice extraction, super-heavy-oil development, rare-earth processing, dry hot rock exploration, quantum communication and space technology.

China’s economy is shifting from traditional to emerging industries.

For example, China’s New Generation Artificial Intelligence Development Plan lays out a grand route to build a $150 billion artificial intelligence industry by 2030. Some of China’s giant companies are also directing billions of dollars into domestic basic research or acquiring innovative technologies from abroad, and China’s innovation in supercomputing has drawn wide international attention and praise.

According to the latest TOP500 list of the world’s supercomputers, released at the 34th International Supercomputing Conference, as of June 2019 China continued to rank first with 219 machines, followed by the United States (116), Japan (29), France (18), Germany (16) and the Netherlands (15); every other country is in single digits. The TOP500, the most authoritative supercomputer ranking, is compiled jointly by computer experts from the United States and Germany and published every six months. China has had more supercomputers on the list than the United States since 2016 and has led ever since.

To date, China has built six national supercomputing centers, in Shenzhen, Guangzhou, Wuxi, Tianjin, Jinan and Changsha. These six supercomputing "brains" have now been interconnected into the world’s first supercomputing internet, forming a "super team" that strengthens the core competitiveness of China’s supercomputing field.

Twenty years ago, not a single Chinese machine appeared among the world’s top 500 supercomputers. Supercomputing has since become a national asset and a focus of competition among countries, because it serves enormous computing needs in scientific research, geological exploration, weather forecasting, computational simulation, biopharmaceuticals, gene sequencing, aerospace, image processing and other fields.

For example, with the novel coronavirus currently spreading in some Chinese cities, researchers need vast computing resources to screen candidate drugs and predict viral mutations. According to media reports, several Chinese supercomputing centers are helping the China CDC develop a novel coronavirus vaccine, and researchers are using them for target searches, new-drug screening, lead and test optimization, and pharmacology and toxicology studies.

In today’s global economy, building and operating the most powerful supercomputers is considered an important measure of a country’s scientific and technological strength, because countries and enterprises increasingly rely on supercomputers in fields such as machinery, new materials, the biological environment and energy technology.

According to Jeremy Rifkin, author of The Third Industrial Revolution, the new global economic wave is a consumption economy driven by new technologies; breakthrough growth will come from economic change powered by new energy, new technologies, new communications, and new transportation and logistics. In this wave, Rifkin believes, China has taken the lead in the innovative science and technology economy. (End)

Application of AI Algorithm in Big Data Governance

Guide reading: This article shares Datacake’s experience applying AI algorithms to big data governance. The sharing has five parts. The first clarifies the relationship between big data and AI: big data can serve AI, and AI can in turn optimize big data services; the two support and depend on each other. The second part introduces the practice of using an AI model to comprehensively evaluate the health of big data tasks, providing a quantitative basis for subsequent data governance. The third part introduces the practice of using an AI model to intelligently recommend Spark task configuration parameters, improving cloud resource utilization. The fourth part introduces model-based recommendation of the execution engine in SQL query scenarios. The fifth part looks ahead to AI applications across the whole big data life cycle.

Contents:

1. Big data and AI

2. Health assessment of big data tasks

3. Spark task intelligent parameter adjustment

4. Intelligent selection of SQL task execution engine

5. The application prospect of AI algorithm in big data governance.

Sharing guest | Li Weimin, algorithm engineer, Happy Eggplant

Edit | Charles

Production community | DataFun

01

Big data and AI

It is generally believed that cloud computing collects and stores massive data, forming big data, and that mining and learning from this data produces AI models. This view assumes big data serves AI but overlooks the fact that AI algorithms can also feed back into big data: the relationship is two-way, with each supporting and depending on the other.

The whole life cycle of big data can be divided into six stages, each facing its own problems, and appropriate use of AI algorithms can help solve them.

Data acquisition: This stage focuses on the quality, frequency and security of data collection, for example whether the collected data is complete, whether collection is too fast or too slow, and whether the data has been desensitized or encrypted. AI can help here, for example by evaluating the reasonableness of log collection against similar applications, or by using anomaly detection algorithms to spot sudden surges or drops in data volume (see the sketch after this list).

Data transmission: This stage focuses on the availability, integrity and security of data; AI algorithms can be used for fault diagnosis and intrusion detection.

Data storage: This stage focuses on whether the storage structure of the data is reasonable, whether resource occupation is low enough, and whether storage is secure enough; AI algorithms can assist with evaluation and optimization.

Data processing: This is the stage where optimization yields the most obvious benefits. The core problem is improving processing efficiency and reducing resource consumption, and AI can attack it from multiple angles.

Data exchange: Cooperation between enterprises is increasing, which raises data security concerns. Algorithms apply here too; for example, the popular federated learning can help share data better and more safely.

Data destruction: Data cannot simply be kept forever, so we must decide when data can be deleted and whether deletion is risky. On top of business rules, AI algorithms can help judge the timing of deletion and its associated impact.
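As a concrete illustration of the data-acquisition point above, here is a minimal sketch of volume anomaly detection with a rolling z-score. The talk does not specify a method; the window size, threshold, and variable names are illustrative assumptions.

```python
# Minimal sketch: flag sudden surges or drops in daily collected data volume
# using a rolling z-score. Window and threshold are illustrative assumptions.
import pandas as pd

def volume_anomalies(daily_rows: pd.Series, window: int = 14, z: float = 3.0) -> pd.Series:
    """Mark days whose volume deviates strongly from the trailing window mean."""
    mean = daily_rows.rolling(window).mean()
    std = daily_rows.rolling(window).std()
    return (daily_rows - mean).abs() > z * std

# Hypothetical usage: `rows_per_day` is a Series indexed by date.
# alerts = volume_anomalies(rows_per_day)
# print(alerts[alerts].index)   # dates whose collection volume looks abnormal
```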

Overall, data life-cycle management has three goals: high efficiency, low cost, and security. In the past we relied on experts’ experience to formulate rules and strategies, which had obvious disadvantages: high cost and low efficiency. Appropriate use of AI algorithms avoids these drawbacks and feeds back into the construction of basic big data services.

02

Health Assessment of Big Data Tasks

At Happy Eggplant, the first application scenario that has already landed is health assessment of big data tasks.

On the big data platform, thousands of tasks run every day. Many of them, however, stop at merely producing correct output; no attention is paid to their running time or resource consumption, leading to widespread inefficiency and waste of resources.

Even when data developers start to pay attention to task health, it is hard to evaluate accurately. Many indicators relate to a task, such as failure rate, elapsed time and resource consumption, and tasks naturally differ in complexity and in the volume of data processed, so simply picking the absolute value of one indicator as the evaluation standard is clearly unreasonable.

Without a quantitative measure of task health, it is hard to determine which tasks are unhealthy and need treatment, let alone where the problem lies and where to start. Even after treatment we cannot tell how effective it was; sometimes one indicator improves while others deteriorate.

Demand: Facing these problems, we urgently needed a quantitative indicator that accurately reflects a task’s overall health. Hand-written rules are inefficient and incomplete, so we turned to machine learning. The goal is a model that gives each task a quantitative score and its position in the global distribution, along with the task’s main problems and suggested fixes.

To meet this demand, the functional module displays the key information of all tasks under an owner’s name in the management interface, such as score, task cost, CPU utilization and memory utilization. The health of each task is then clear at a glance, which makes subsequent task management convenient for the owner.

Second, the scoring function is modeled as a classification problem. Intuitively, task scoring looks like a regression problem whose output is an arbitrary real number between 0 and 100, but that would require enough samples labeled with exact scores, and manual scoring is costly and unreliable.

We therefore recast the problem as classification: the class probability produced by a classifier can be mapped to a real-valued score. We divide tasks into two categories, good task (1) and bad task (0), labeled by big data engineers. A good task is generally one that runs quickly and consumes few resources for a given workload and complexity.

The model training process is as follows:

The first step is sample preparation. Samples come from historical task data; features include running time, resources used, whether execution failed, and so on, and labels of good or bad are assigned by big data engineers according to rules or experience. We then train the model. We tried LR, GBDT, XGBoost and other models, and both theory and practice showed that XGBoost classifies best. The model outputs the probability that a task is a "good task"; the greater the probability, the higher the final mapped score.
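A minimal sketch of this pipeline, assuming a historical table of task runs. The file name, feature columns, and hyperparameters are illustrative assumptions; the talk does not list them.

```python
# Minimal sketch of the task-health scorer: a binary XGBoost classifier whose
# P(good task) is mapped to a 0-100 score. Columns are illustrative.
import numpy as np
import pandas as pd
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("task_history.csv")           # hypothetical export of task runs
features = ["run_seconds", "cpu_vcores", "memory_gb",
            "failure_count", "input_rows"]      # illustrative feature columns
X, y = df[features], df["is_good_task"]         # 0/1 labels from engineers

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
model = XGBClassifier(n_estimators=300, max_depth=5, learning_rate=0.1)
model.fit(X_tr, y_tr)

def health_score(tasks: pd.DataFrame) -> np.ndarray:
    # The classifier's P(good task) maps directly to a 0-100 health score.
    return np.round(model.predict_proba(tasks)[:, 1] * 100)

print(health_score(X_val.head()))
```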

After training, 19 features were selected from the initial set of nearly 50, and these largely determine whether a task is good. For example, tasks with many failures or low resource utilization mostly do not score high, which matches human intuition.

Once tasks are scored, those between 0 and 30 are unhealthy tasks in urgent need of governance; those between 30 and 60 are acceptably healthy; and those scoring 60 or above are in good health and only need to maintain the status quo. With a quantitative indicator, task owners can be guided to actively govern their tasks, reducing cost and increasing efficiency.

Applying the model brought us the following benefits:

(1) Task owners can see the health of the tasks under their name, and know from scores and rankings whether governance is needed;

(2) Quantitative indicators provide a basis for subsequent task governance;

(3) The profit and improvement achieved after governance can also be demonstrated quantitatively through scores.

03

Spark task intelligent parameter adjustment

The second application scenario is intelligent parameter tuning for Spark tasks. A Gartner survey found that 70% of the cloud resources consumed by cloud users are wasted unnecessarily. When applying for cloud resources, many people over-provision to guarantee task success, which causes unnecessary waste; many others simply keep the default configuration when creating tasks, which is rarely optimal. Careful configuration can preserve running efficiency and task success while saving substantial resources. However, parameter configuration demands a lot from users: they must understand each configuration item as well as the interactions between items. Even expert experience struggles to reach the optimum, and rule-based strategies are hard to adjust dynamically.

This raises a demand: a model that intelligently recommends the optimal configuration parameters for a task, improving cloud resource utilization while keeping the task’s original running time unchanged.

For the parameter-tuning module, our design covers two situations: first, for tasks that have already run, the model recommends the most suitable configuration based on the task’s history; second, for tasks that have not yet gone online, the model gives a reasonable configuration by analyzing the task itself.

The next step is training the model, and the first decision is the model’s output target. There are more than 300 configurable items, and the model cannot set them all. After testing and investigation, we chose the three parameters with the greatest influence on task performance: executor cores, executor memory, and the number of executor instances. Each has a default value and an adjustable range, which together define a parameter space; the model only needs to find the optimal point in this space.

The training stage had two schemes. Option one is learning experience rules. In the early stage, parameters were recommended by hand-written rules, which worked well online, so we first let the model learn this rule set to get online quickly. The training samples are more than 70,000 task configurations previously computed by the rules. The features are the task’s historical operation data (such as the amount of data processed, resources used and time consumed) and some statistics (such as average and maximum consumption over the past seven days).

As the base model we chose multivariate regression with multiple dependent variables. A common regression model has a single output: many independent variables but only one dependent variable. Here we want to output three parameters, so we adopted a multiple-output regression model, which is essentially still a linear (LR) model.

The theoretical basis is the multivariate extension of linear regression: the label is multi-dimensional (the three configuration items), β are the per-feature coefficients, and σ is the error term. Training is the same as univariate regression: least squares is used to minimize the sum of squares of all elements of σ.
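A minimal sketch of Option 1, fitting the rule-produced configurations with one multi-output least-squares regression. The file and column names are illustrative assumptions.

```python
# Minimal sketch: learn the rule-based configurations with a multi-output
# linear regression (one least-squares fit, three dependent variables).
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("rule_configs.csv")   # hypothetical: ~70k rule-tuned tasks
X = df[["input_gb", "avg_runtime_s", "avg_cpu_7d", "max_mem_7d"]]   # illustrative
y = df[["executor_cores", "executor_memory_gb", "executor_instances"]]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
reg = LinearRegression().fit(X_tr, y_tr)      # sklearn handles multi-output natively
print(reg.predict(X_te.head(1)))              # -> [[cores, memory_gb, instances]]
```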

The advantage of option one is that it learns the rules quickly and cheaply. The drawback is that its optimization ceiling is the rules themselves: at best it matches them, and exceeding them is difficult.

The second scheme is Bayesian optimization, which, somewhat like reinforcement learning, tries to find the optimal configuration in the parameter space. The Bayesian framework is used because each attempt builds on the previous ones, giving prior experience that quickly steers the search toward better regions. Training proceeds over the parameter space: randomly sample a configuration, run the task with it, then evaluate indicators such as utilization and cost to judge whether it is optimal; repeat until tuning is complete. After training, there is also a shortcut at serving time: if a new task is sufficiently similar to a historical task, there is no need to search again, and the previous optimal configuration can be used directly.
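A minimal sketch of Option 2 using scikit-optimize’s Gaussian-process search over the three-parameter space. The ranges, the cost formula, and the `submit_and_measure` helper are illustrative assumptions; in practice the objective would launch the Spark task with the candidate configuration and measure its runtime and utilization.

```python
# Minimal sketch: Bayesian optimization over (cores, memory, instances).
from skopt import gp_minimize
from skopt.space import Integer

space = [Integer(1, 8,   name="executor_cores"),
         Integer(2, 32,  name="executor_memory_gb"),
         Integer(2, 100, name="executor_instances")]   # illustrative ranges

def run_cost(params):
    cores, mem_gb, instances = params
    # Hypothetical helper: submit the task with this config, wait, and
    # return its measured runtime and cloud cost (lower score is better).
    runtime_s, cost_usd = submit_and_measure(cores, mem_gb, instances)
    return cost_usd + 0.001 * runtime_s       # illustrative trade-off weight

result = gp_minimize(run_cost, space, n_calls=30, random_state=0)
print(result.x)   # best [cores, memory_gb, instances] found so far
```

Each call to the objective is a real task run, which is exactly why the talk notes this scheme is expensive.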

Trialing these two schemes produced clear results: for existing tasks modified according to the model’s recommended configuration, more than 80% of tasks improved resource utilization by about 15%, and some even doubled it. But both schemes have defects: the rule-learning regression model has a low optimization ceiling, while the global Bayesian optimization model is expensive because every attempt requires actually running the task.

The future exploration directions are as follows:

Semantic analysis: Spark semantics are rich, with different code structures and operator functions that are closely related to parameter configuration and resource consumption. At present we use only the task’s historical operation data and ignore the Spark semantics themselves, which wastes information. The next step is to go down to the code level, analyze the operator functions a Spark task contains, and tune at a finer granularity.

Classification tuning: Spark has many application scenarios, such as pure analysis, development and processing. The tuning space and objectives differ across scenarios, so tuning should be done per class.

Engineering optimization: A practical difficulty is that samples are few and testing is expensive, which requires cooperation from related parties to optimize the project or process.

04

Intelligent selection of SQL task execution engine

The third application scenario is the intelligent choice of SQL query task execution engine.

Background:

(1) The SQL query platform is the big data product that users touch most and experience most directly. Data analysts, R&D engineers and product managers all write large amounts of SQL every day to get the data they want;

(2) Many people running SQL tasks pay no attention to the underlying execution engine. Presto, for example, computes purely in memory: in simple query scenarios it executes faster, but if memory is insufficient the query fails outright. Spark, in contrast, suits complex scenarios with large data volumes: even if an OOM occurs it spills to disk, avoiding task failure. Different engines therefore suit different task scenarios;

(3) SQL query effectiveness must weigh both execution time and resource consumption; one should neither chase query speed regardless of resources nor save resources at the expense of query efficiency;

(4) The industry has three traditional engine-selection methods: RBO, CBO and HBO. RBO, the rule-based optimizer, is hard to write rules for and updates infrequently; CBO, cost-based optimization, can fail tasks by pursuing cost reduction too aggressively; HBO, based on historical task runs, is limited to what the history contains.

In the functional module’s design, after a user writes a SQL statement and submits it for execution, the model automatically judges which engine to use and prompts the user in a pop-up window; the user makes the final decision on whether to execute with the recommended engine.

The overall scheme recommends the execution engine from the SQL statement itself: the tables and functions a statement uses directly determine its complexity, and hence the engine choice. Training samples come from historically executed SQL statements, labeled according to their historical execution; for example, long-running tasks over huge data volumes are labeled as suited to Spark, and the rest as suited to Presto. Sample features are extracted with NLP techniques, N-grams plus TF-IDF: roughly, extract phrases and score them by their frequency within statements, surfacing key phrase groups. The resulting feature vectors are very large, so we first select 3,000 features with a linear model and then train an XGBoost model as the final predictor.
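A minimal sketch of that pipeline, assuming a labeled history of SQL statements. File name, column names, n-gram range, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: n-gram TF-IDF features, a linear model to keep ~3000 of
# them, then XGBoost as the final engine classifier (1 = Spark, 0 = Presto).
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier

df = pd.read_csv("sql_history.csv")            # hypothetical: sql_text, engine_label
vec = TfidfVectorizer(ngram_range=(1, 3), max_features=50000)
X = vec.fit_transform(df["sql_text"])
y = df["engine_label"]                          # labeled from historical execution

# A sparse L1 linear model ranks the n-gram features; keep the strongest 3000.
selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
    threshold=-np.inf, max_features=3000).fit(X, y)

clf = XGBClassifier(n_estimators=200, max_depth=6).fit(selector.transform(X), y)

def recommend_engine(sql: str) -> str:
    x = selector.transform(vec.transform([sql]))
    return "spark" if clf.predict(x)[0] == 1 else "presto"
```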

After training, the model’s prediction accuracy is relatively high, at roughly 90% or more.

The final model’s online flow is: after the user submits SQL, the model recommends an execution engine. If it differs from the engine the user originally selected, a language-conversion module translates the SQL statement. If execution fails after switching engines, a failover mechanism switches back to the user’s original engine, guaranteeing the task succeeds.
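The control flow is simple enough to sketch directly. `translate_sql` and `run` stand in for the platform’s conversion module and execution client, which the talk names but does not expose; both are assumptions here.

```python
# Minimal sketch of the online flow: recommend, convert if needed, and fail
# over to the user's original engine on error.
def execute_sql(sql: str, user_engine: str):
    engine = recommend_engine(sql)                 # classifier from the sketch above
    if engine != user_engine:
        sql = translate_sql(sql, target=engine)    # hypothetical conversion module
    try:
        return run(sql, engine)                    # hypothetical execution client
    except Exception:
        # Failover: rerun on the user's original engine so the task still succeeds.
        return run(sql, user_engine)
```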

The benefit of this practice is that the model automatically selects the most suitable execution engine and completes the subsequent statement conversion, with no extra learning required of users.

In addition, the engine recommended by the model keeps the original execution efficiency essentially unchanged while reducing the failure rate, so the overall user experience improves.

Finally, with less unnecessary use of high-cost engines and a lower task failure rate, overall resource consumption has decreased.

Parts two through four shared three applications of AI algorithms on the big data platform. One visible characteristic is that the algorithms used are not especially complicated, but the effects are very obvious. This encourages us to proactively look for pain points and optimization opportunities in the big data platform’s operation; once an application scenario is identified, we can try different machine learning methods on it, realizing AI’s feedback to big data.

05

Application Prospect of AI Algorithm in Big Data Governance

Finally, we look forward to the application scenario of AI algorithm in big data governance.

The three application scenarios described above focus on the data processing stage. Echoing the relationship between AI and big data in the first section, AI can play a role across the whole data life cycle.

For example, in the acquisition stage, AI can judge whether log collection is reasonable; in transmission, it can do intrusion detection; in processing, it can further reduce costs and increase efficiency; in exchange, it can help secure data; in destruction, it can judge the timing and associated impact of deletion. AI has many application scenarios on a big data platform, and these examples are only a starting point. We believe the mutual support between AI and big data will become still more prominent: AI helps big data platforms collect and process data better, and better data quality helps train better AI models, a virtuous circle.

06

Question and answer session

Q1: What kind of rule engine is used? Is it open source?

A1: The parameter-tuning rules here were formulated by our big data colleagues from early manual tuning experience, for example: if a task’s execution time exceeds so many minutes, or it processes so much data, recommend so many cores or so much memory. It is a rule set accumulated over a long time that performed well online, and we used it to train our parameter-recommendation model.

Q2: Are the dependent variables only the parameter adjustments? Did you consider the influence of the big data platform’s performance instability on the results?

A2: Parameter recommendation does not simply chase low cost; otherwise the recommended resources would be too small and tasks would fail. The dependent variables are indeed only the parameters, but to guard against instability we added extra constraints. First, for model features we use averages over a period rather than values from an isolated day; second, we compare the model’s recommended parameters with the actual configured values, and if the difference is too large we adopt a slow-rise, slow-fall strategy, avoiding task failures caused by one overly large adjustment.

Q3: Are the regression model and the Bayesian model used at the same time?

A3: No. As just described, parameter recommendation used two schemes: learning the rules uses the regression model, and then the Bayesian optimization framework is applied. They are not used simultaneously; they were two successive attempts. The advantage of the former is that it quickly absorbs past experience; the second model can then find a better, even optimal, configuration on top of it. The relationship between them is sequential and progressive, not concurrent.

Q4: Is semantic analysis being introduced in order to expand the feature set?

A4: Yes. As mentioned, the information we use for Spark tuning today is only the task’s execution history; we have not yet looked at the Spark task itself. A Spark task contains a lot of information, including its various operators and stages, and ignoring its semantics loses much of it. Our next plan is to analyze Spark task semantics and expand the feature set to assist parameter calculation.

Q5: Could an unreasonable parameter recommendation cause abnormal or even failed tasks? How do you reduce task errors and fluctuation in that scenario?

A5: Relying entirely on the model could make it chase resource utilization too aggressively, recommending radical parameters such as shrinking memory from 30 GB to 5 GB at once. So beyond the model’s recommendation we add constraints, for example a cap on how many GB a single adjustment may span: the slow-rise, slow-fall strategy.
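A minimal sketch of that guard: clamp each recommended parameter to within a maximum step of the currently deployed value. The step sizes are illustrative assumptions, not values from the talk.

```python
# Minimal sketch of the "slow rise, slow fall" strategy: move each parameter
# at most one bounded step per run. Step sizes are illustrative assumptions.
MAX_STEP = {"executor_cores": 2, "executor_memory_gb": 8, "executor_instances": 20}

def clamp_recommendation(current: dict, recommended: dict) -> dict:
    safe = {}
    for key, rec in recommended.items():
        step = MAX_STEP[key]
        lo, hi = current[key] - step, current[key] + step
        safe[key] = max(lo, min(hi, rec))   # clamp to [current - step, current + step]
    return safe

# E.g. memory 30 GB with a recommendation of 5 GB only falls to 22 GB this run.
print(clamp_recommendation(
    {"executor_cores": 4, "executor_memory_gb": 30, "executor_instances": 50},
    {"executor_cores": 2, "executor_memory_gb": 5,  "executor_instances": 60}))
```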

Q6: SIGMOD 2022 had some papers related to parameter tuning. Did you reference them?

A6: Intelligent task parameter tuning remains a hot research direction, and teams in different fields have adopted different methods and models. Before starting we surveyed many industry methods, including the SIGMOD 2022 papers you mention. After comparison and practice we settled on the two schemes shared today. We will keep following the latest progress in this direction and try more methods to improve the recommendation effect.

That’s all for today’s sharing. Thank you.


About DataFun

DataFun focuses on sharing and exchanging applications of big data and artificial intelligence technology. Founded in 2017, it has held more than 100 offline and 100 online salons, forums and summits in Beijing, Shanghai, Shenzhen, Hangzhou and other cities, inviting more than 2,000 experts and scholars to share. Its WeChat official account, DataFunTalk, has accumulated 900+ original articles, 1,000,000+ reads and 160,000+ precise followers.

What wonders will emerge when AI chatbots meet the metaverse?

Please write a short essay about the metaverse with me.
In today’s society, with the development of science and technology and the spread of the Internet, people’s demand for virtual worlds keeps growing. The metaverse, as a concept of a virtual world, has attracted extensive attention and research. The metaverse is a virtual space that integrates various virtual reality technologies and applications, in which users can carry out all kinds of activities, such as socializing, gaming, education and business. Unlike traditional virtual reality, the metaverse is an open and sustainable virtual world that can accommodate countless users and applications and enables cross-platform, cross-device interconnection. Its application prospects are very broad. Socially, it lets users meet new friends and share life and experience in the virtual world. In gaming, it can provide a more realistic, immersive experience. In education, it offers more vivid and intuitive learning, helping students understand knowledge better. In business, it provides enterprises with brand-new marketing channels and business models, bringing more opportunities. However, the metaverse’s development also faces challenges. First, technical limitations may restrict its speed of development and scope of application. Second, security and privacy in the virtual world must be fully valued and safeguarded. Third, the connection and coordination between the virtual and real worlds need further improvement and optimization. Overall, the metaverse is a virtual world with great potential that will bring us more opportunities and challenges; we must keep advancing technological innovation and application exploration to realize its development and use.
What will humans be like when the metaverse is realized in the future?
If the metaverse is truly realized, human life will change greatly; we will enter a brand-new virtual world that blends with the real one. First, the metaverse will become an important part of people’s lives. In it, people can carry out all kinds of activities, such as socializing, gaming, education and business; they can establish virtual identities and social networks and interact with other users. It will also become an important place for study and work, offering more flexible and diverse ways of doing both. Second, the metaverse will change people’s perception and cognition. In it, people can experience more vivid and intuitive virtual reality, which will profoundly affect how they perceive and understand the world; they can explore the unknown, expand their cognitive boundaries, and stretch their thinking and imagination. Finally, the metaverse will bring people more opportunities and challenges. People will be able to create their own virtual worlds and applications, providing services and value to other users. At the same time, it will face challenges such as technical limitations and security and privacy issues, which require continuous technological innovation and application exploration. Overall, the metaverse will bring mankind great changes and opportunities, and we must keep exploring and innovating to adapt to this new era.
Which industry do you think is most likely to enter metaverse projects first?
At present, the game industry is the most likely to lead the way. It has accumulated rich experience and technology in virtual and augmented reality, and games are themselves virtual worlds, providing a good foundation and support for the metaverse’s development. It is also a very open and innovative industry with strong technical strength. Many game companies have already begun exploring metaverse applications, such as Epic Games’ Unreal Engine and Roblox’s virtual world platform; these companies have the conditions and capabilities to enter metaverse projects. Of course, other industries, such as social media, education, medical care and finance, may also move early. Whichever industry it is, questions of technology, security and privacy must be fully considered to realize the metaverse’s development and application.
Can you imagine what a metaverse classroom in the education industry would look like?
When the education industry enters the metaverse, we can imagine a brand-new class mode in which students and teachers interact through virtual reality technology in a more realistic and intuitive learning environment. First, students can create virtual identities in the metaverse and interact with other students and teachers: attending class in a virtual classroom, communicating face to face with teachers, or joining virtual lectures, seminars and other activities. Second, the learning environment can be customized to the discipline and course. In a history course, students might enter a virtual historical scene, such as an ancient Roman city or the Egyptian pyramids, to experience historical culture and events; in a science course, they might enter a virtual laboratory to conduct experiments and observations. Finally, learning content can be more diverse and personalized: students can practice and experience through simulation, better understanding and mastering knowledge, while teachers provide personalized content and support according to students’ interests and needs. Overall, education’s entry into the metaverse will bring students and teachers more open, innovative and diverse ways of learning, with far-reaching effects on the future of education.
Do you think the metaverse can really be realized? When will mankind enter the true metaverse era?
The metaverse is a very advanced and complex virtual world that must be built on advanced technology and equipment. Technologies such as virtual reality, augmented reality, blockchain and artificial intelligence are developing and innovating constantly, providing a good technical foundation for realizing the metaverse. However, realization still requires overcoming problems of technology, security and privacy, for example how to protect users’ privacy and data security and how to avoid virtual reality motion sickness, all of which need technical and legal solutions. Many companies and organizations have begun exploring metaverse applications, such as Facebook’s Horizon Workrooms and Epic Games’ Unreal Engine, and these projects have made progress. It is predicted that in the next few years the metaverse will gradually become an important technical and commercial field, and mankind will gradually enter the true metaverse era. Overall, realizing the metaverse requires support and effort on many fronts, but with continued technological development and innovation we believe it is possible, and it will bring mankind a more open, innovative and diverse virtual world.
What role do you think the development of AI will play in realizing the metaverse era?
AI is one of the key technologies for realizing the metaverse and will play a very important role in bringing about the metaverse era.
Here are several of its specific functions:
1. Intelligent interaction: AI can give metaverse users more intelligent, natural ways to interact. For example, speech recognition and natural language processing let users engage with the metaverse more conveniently.
2. Intelligent management: AI can give metaverse administrators more intelligent, automated management, such as automatic security monitoring and virtual-economy management, greatly improving the metaverse’s management efficiency and security.
3. Intelligent content generation: AI can give metaverse content creators more intelligent, automated generation methods, such as automatically generating virtual scenes and characters, greatly improving the efficiency and quality of content production.
4. Intelligent recommendation: AI can recommend more personalized content and services based on users’ interests and behavior, improving satisfaction and experience.
Overall, AI will play a very important role in realizing the metaverse, providing more intelligent, natural and personalized services and experiences and greatly increasing the metaverse’s value and appeal.
Can AI give virtual humans real human-like intelligence?
Although current AI technology is very advanced, it still cannot fully simulate the intelligence of the human brain, especially in emotion, creativity and intuition. However, as AI continues to develop and innovate, virtual humans may come to exhibit brain-like intelligence in the future. Some researchers are exploring deep learning, neural networks and other techniques to simulate human intelligence: deep learning can let virtual humans learn and adapt to their environment automatically, and neural networks can support complex decisions and reasoning.
Meanwhile, other researchers are exploring ways to combine the human brain with computers, for example converting brain signals into computer instructions through brain-computer interfaces, enabling more natural and efficient interaction and control. Overall, although current AI cannot fully simulate the human brain, with continued technological development we believe virtual humans with brain-like intelligence may become possible.
Do you think AI will accelerate the evolution of the human brain?
The development of AI can certainly influence human evolution, but whether it will accelerate the evolution of the human brain needs more research and discussion. On one hand, AI provides humans with more intelligent and efficient tools and services, helping them perform better in many fields. In medicine, education, finance and elsewhere, AI is already widely applied, helping people understand and handle problems and advancing human society. On the other hand, AI may also affect human evolution negatively: over-reliance on it may weaken human intelligence and creativity, and could even erode our ability to adapt to the natural environment.
In addition, AI’s continued development may raise moral and ethical problems that we must handle cautiously. Therefore, while making full use of AI, we should stay vigilant and actively explore how to advance human evolution and development with its help.
Your answer is very wise and logical, thank you!
You’re welcome. I will try my best to provide you with accurate and useful information and help. Please feel free to let me know if you have any other questions or need help.

Nagelsmann: Sané is a threat whenever he has the ball; his attitude today really satisfied me.

Live: On March 12th, Bayern beat Augsburg 5-3 in this round of the Bundesliga, and coach Julian Nagelsmann talked about the team’s performance in a post-match interview.

"Every time we face Augsburg we start poorly. We had to try to equalize before thinking about going ahead. Most importantly, we responded well, and the final victory is what we deserved."

Talking about Sané, who contributed greatly to the win, Nagelsmann said: "He could have handled some situations better, but as long as he has the ball at his feet he is a threat to the opponent, and he can exploit his speed. Most of our attacks today went through Sané. Most importantly, he showed passion, and that really satisfies me."

(Chenpi is not orange)

Match of the Day to air without presenter or pundits after Gary Lineker’s suspension

Match of the Day will be broadcast without presenters, pundits or its usual commentators this weekend, after the main host, Gary Lineker, was suspended from the BBC for breaching impartiality guidelines after criticising the government’s asylum policies.

In a dramatic and unexpected escalation of a crisis that has been brewing all week, the corporation took the decision to remove its highest-paid presenter from its flagship football show after he was criticised by Tory MPs and the rightwing media.

His suspension immediately led to displays of solidarity from Lineker’s co-hosts Ian Wright and Alan Shearer, who publicly announced that they would not be turning up to present Saturday’s show.

Faced with an effective strike by its on-screen staff and unable to find willing replacements, the BBC took the unprecedented decision to announce that the Premier League highlights programme would go ahead without any hosts or studio presentation. It will feature only match footage.

A BBC spokesperson said: “Some of our pundits have said that they don’t wish to appear on the programme while we seek to resolve the situation with Gary.

“We understand their position and we have decided that the programme will focus on match action without studio presentation or punditry.”

Late on Friday, commentator Steve Wilson tweeted that the show’s commentary team had decided to pull out of the programme. “As commentators on MOTD, we have decided to step down from tomorrow night’s broadcast,” he said. “We are comforted that football fans who want to watch their teams should still be able to do so, as management can use World Feed commentary if they wish.”

The BBC has the option to use the Premier League’s world feed service.

Ministers put the BBC under pressure after Lineker posted tweets this week criticising Rishi Sunak’s new asylum policy. He likened the language used by ministers about asylum seekers to “that used by Germany in the 30s”, a comparison labelled “offensive” by the home secretary, Suella Braverman.

A BBC spokesperson said Lineker would be off air until an agreement was reached on his future use of social media. If Lineker, who has two years left on his contract to present Match of the Day, refuses to back down, then it opens up the possibility of him leaving the corporation.

Lineker was taken off air on Friday afternoon despite making it clear that he wanted to host this weekend’s edition of the programme.

A BBC spokesperson said: “The BBC has been in extensive discussions with Gary and his team in recent days. We have said that we consider his recent social media activity to be a breach of our guidelines. The BBC has decided that he will step back from presenting Match of the Day until we’ve got an agreed and clear position on his use of social media.

“When it comes to leading our football and sports coverage, Gary is second to none. We have never said that Gary should be an opinion-free zone, or that he can’t have a view on issues that matter to him, but we have said that he should keep well away from taking sides on party political issues or political controversies.”

The Channel 5 News presenter Dan Walker was live on air while exchanging text messages with Lineker. According to Walker, Lineker emphasised that it was the BBC’s decision to take him off screen, saying: “They’ve told me I’ve had to step back.”

Wright, the former England footballer and a regular co-presenter, swiftly said he would not be working on this Saturday’s Match of the Day as a result of Lineker’s suspension. “Everybody knows what Match of the Day means to me, but I’ve told the BBC I won’t be doing it tomorrow. Solidarity,” he tweeted.

Shearer, who had also been booked to appear on Saturday’s show, confirmed that he would not turn up to work – leaving the BBC without any mainstream presenters for one of its best-known programmes.

Regular pundit and former Manchester City defender Micah Richards backed the decision of Wright and Shearer, tweeting: “I was not due to be working on MOTD tomorrow, but if I was, I would find myself taking the same decision that @IanWright0 & @alanshearer have.”

BBC presenter Jermaine Jenas also said he was standing “with his fellow pundits and Gary Lineker”. He tweeted: “Been on air with The One Show. I wasn’t down to be doing Match of the Day tomorrow, but if I was, I would of said no and stood with my fellow pundits and @GaryLineker.”

The presenter and former Arsenal and England player Alex Scott also appeared to rule herself out of presenting Saturday’s edition of Match of the Day amid speculation that she might take over. Scott tweeted a picture of US politician Bernie Sanders saying, “Nah! Not me” alongside the caption: “FYI …”

Mark Chapman also reportedly ruled himself out.

While suspending Lineker for breaching impartiality guidelines, the BBC chair, Richard Sharp, has faced calls to quit in recent weeks. It was recently revealed that Sharp, a previous Tory donor, had made the introductions between Boris Johnson and one of the former prime minister’s distant cousins, who became a guarantor for a substantial loan. Sharp later claimed that he had acted “in good faith” as a “go-between”. Labour said his position was “increasingly untenable”.

The journalist Jon Sopel, who has held several senior positions at the BBC, said: “Lucky there are no producer guidelines on whether you need to declare facilitating an £800k loan to a prime minister while applying for a job as chairman of a broadcasting organisation.”

The former Manchester United and England defender Gary Neville, a commentator for Sky Sports, said the decision was what happened when “you take on the Tories and the system”.

Philippa Childs, head of the entertainment trade union Bectu which represents thousands of BBC workers, said the broadcaster’s decision was “deeply concerning”. “It will give the appearance that they have bowed to political pressure from ministers to take someone off air for disagreeing with the policies of the current government,” she said.

“Taken with the ongoing controversy over the appointment of the BBC chairman, who has a much more important role in upholding the reputation of the BBC, and who has not stepped back while under investigation, it also risks giving the impression of double standards on these issues.”

Lineker’s comments dominated coverage of the government’s new asylum policy and were publicly criticised by a number of cabinet ministers. BBC sources suggested his decision to double down on the comments in further tweets, fanning the story, exasperated senior BBC management.

One issue for the BBC is whether Lineker would be happy to walk away from presenting Match of the Day, given that he has extensive business interests outside the corporation and has shown no indication that he will retract his comments.

Antonio Conte casts doubt on Spurs future as Champions League woes go on

Antonio Conte’s dismal record in the Champions League continued as his Tottenham Hotspur side exited the competition in the last 16 after a dreary 0-0 draw at home to AC Milan on Wednesday.

Tottenham never really threatened to overturn a 1-0 deficit from the first leg and went out with a whimper and boos from the home fans whose patience is running out.

Conte returned to the technical area after missing the last four games following gallbladder surgery but was as subdued as his team on a freezing night in north London.