Algorithm practice check-in #14
Personal notes on the problems

Problem 67: Number of Triangles
This one is solved by brute force: enumerate all three sides a, b, c of the triangle, each from 1 to n. Filter out the cases where a + b + c != n (the perimeter is not n), and filter out the cases where any two of the three sides are equal. The triple must also satisfy the triangle condition, i.e. the sum of any two sides must be greater than the third side, and in addition it must satisfy a < b < c so that each triangle is counted only once. Whatever survives all of these filters is a valid triangle.

Problem 68: Primes
When testing whether n is prime, remember to handle the n = 1 case: when the input M = 1, the answer is that 1 is not a prime.

Problem 69: Pascal's Triangle
1. Store the triangle in a 2D array a[21][21], initialized to all zeros.
2. For each row i, set the first number a[i][0] and the last number a[i][i] to 1.
3. Starting from the third row (i = 2), traverse the middle columns of each row, j = 1 to j = i - 1; each middle number equals the sum of the two adjacent numbers directly above it: a[i][j] = a[i - 1][j - 1] + a[i - 1][j].
4. When printing, output the first number of each row directly and prefix every later number with a space; print a newline after each row, and print an extra blank line after each complete triangle.

Shanbay English

Translation passage:
The Transformer model is a neural network architecture based on the attention mechanism and has achieved great success in the field of natural language processing. Unlike traditional recurrent neural networks, Transformers do not rely on step-by-step sequence processing. Instead, they use self-attention mechanisms to process the entire sequence simultaneously. This architecture not only improves the model's ability to perform parallel computation but also enables it to capture long-range dependencies more effectively. In machine translation tasks, the Transformer model can dynamically assign attention weights according to the relationships between different words in a sentence, thereby producing more accurate translations. In addition, Transformer architectures have been widely applied to tasks such as text generation, speech recognition, and even image processing. In recent years, most large-scale pre-trained language models have been built upon the Transformer architecture, which has significantly accelerated the development of artificial intelligence technologies.

我的翻译：Transformer模型是一种基于注意力机制的神经网络架构，并且在自然语言处理领域取得了巨大的成就。不像传统循环神经网络，Transformer不依赖于一步一步的序列处理；相反，它们使用自注意力机制同时去处理完整的序列。这种架构不仅提高了模型并行计算的能力，而且使其更有效地捕捉长距离依赖。在机器翻译任务中，Transformer模型可以根据句子中不同单词之间的关系来动态分配注意力权重，从而产生更精确的翻译。此外，Transformer架构已经广泛应用于如文本生成、语音识别甚至图像处理等任务。近些年来，大部分大规模预训练语言模型都基于Transformer架构建立，这显著推动了人工智能技术的发展。
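The brute force described for Problem 67 can be sketched as follows; this is a minimal illustration in Python (the function name is my own, and the filters follow the write-up: perimeter exactly n, no two sides equal, triangle inequality, and the ordering a < b < c):

```python
def count_triangles(n):
    """Count triangles with integer sides and perimeter n whose
    three sides are pairwise distinct, by brute-force enumeration."""
    count = 0
    for a in range(1, n + 1):
        for b in range(1, n + 1):
            for c in range(1, n + 1):
                if a + b + c != n:                 # perimeter must be exactly n
                    continue
                if a == b or b == c or a == c:     # no two sides may be equal
                    continue
                if not (a < b < c):                # fixed ordering avoids duplicates
                    continue
                if a + b <= c:                     # triangle inequality (c is largest)
                    continue
                count += 1
    return count
```

With a < b < c enforced, the pairwise-distinct check is technically redundant, and only a + b > c needs testing for the triangle condition, since c is the largest side; the filters are kept separate here to mirror the write-up.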
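For Problem 68, a simple trial-division primality test that handles the n = 1 edge case mentioned above might look like this (a sketch, not necessarily the judge's reference solution):

```python
def is_prime(n):
    """Trial-division primality test; explicitly handles n <= 1."""
    if n <= 1:            # 1 (and anything smaller) is not prime
        return False
    i = 2
    while i * i <= n:     # only divisors up to sqrt(n) need checking
        if n % i == 0:
            return False
        i += 1
    return True
```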
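The four steps for Problem 69 can be sketched in Python like so (function names are my own; the 21x21 array size follows the write-up, so n is assumed to be at most 21):

```python
def pascal_rows(n):
    """Build the first n rows of Pascal's triangle (n <= 21)."""
    a = [[0] * 21 for _ in range(21)]      # step 1: 21x21 array of zeros
    for i in range(n):
        a[i][0] = a[i][i] = 1              # step 2: both ends of each row are 1
    for i in range(2, n):                  # step 3: from the third row (i = 2)
        for j in range(1, i):              # middle columns j = 1 .. i-1
            a[i][j] = a[i - 1][j - 1] + a[i - 1][j]
    return [a[i][:i + 1] for i in range(n)]

def print_pascal(n):
    """Step 4: first number printed directly, later numbers prefixed by a
    space; newline after each row, blank line after the whole triangle."""
    for row in pascal_rows(n):
        print(" ".join(str(x) for x in row))
    print()
```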